Test Report: Docker_Linux_containerd 14483

eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6:2022-07-01:24700

Test fail (9/279)

TestKubernetesUpgrade (566.28s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220701225105-10066 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220701225105-10066 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (53.521276816s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220701225105-10066
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220701225105-10066: (1.451591108s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220701225105-10066 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220701225105-10066 status --format={{.Host}}: exit status 7 (143.53982ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
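Note: in this run, exit status 7 from minikube status simply reports a stopped host ("Stopped" on stdout), not a failure, which is why the test accepts it ("may be ok") immediately after the explicit stop. Checked by hand it looks like this (sketch, using this run's profile name):

	out/minikube-linux-amd64 -p kubernetes-upgrade-20220701225105-10066 status --format={{.Host}}
	# prints: Stopped
	echo $?
	# prints: 7  (host stopped, not errored)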
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220701225105-10066 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220701225105-10066 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (8m27.238269737s)

-- stdout --
	* [kubernetes-upgrade-20220701225105-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14483
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node kubernetes-upgrade-20220701225105-10066 in cluster kubernetes-upgrade-20220701225105-10066
	* Pulling base image ...
	* Restarting existing docker container for "kubernetes-upgrade-20220701225105-10066" ...
	* Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Jul 01 23:00:27 kubernetes-upgrade-20220701225105-10066 kubelet[11544]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir

-- /stdout --
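Note: the kubelet error above is the root cause of the exit status 109. The profile was created on v1.16.0 carrying the kubelet extra option cni-conf-dir=/etc/cni/net.mk (see ExtraOptions in the config dumps below), and the upgrade to v1.24.2 re-applies it; kubelet removed the --cni-conf-dir flag in Kubernetes 1.24 together with dockershim, so the upgraded control plane can never boot. A minimal sketch of the same failure mode, assuming the stale flag is the only blocker (profile name "upgrade-demo" is illustrative):

	minikube start -p upgrade-demo --kubernetes-version=v1.16.0 \
	  --container-runtime=containerd \
	  --extra-config=kubelet.cni-conf-dir=/etc/cni/net.mk   # accepted by the v1.16 kubelet
	minikube stop -p upgrade-demo
	minikube start -p upgrade-demo --kubernetes-version=v1.24.2
	# kubelet: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir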
** stderr ** 
	I0701 22:52:00.949558  160696 out.go:296] Setting OutFile to fd 1 ...
	I0701 22:52:00.949689  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:52:00.949695  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:52:00.949702  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:52:00.950239  160696 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 22:52:00.950529  160696 out.go:303] Setting JSON to false
	I0701 22:52:00.975830  160696 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2074,"bootTime":1656713847,"procs":556,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 22:52:00.975934  160696 start.go:125] virtualization: kvm guest
	I0701 22:52:00.978967  160696 out.go:177] * [kubernetes-upgrade-20220701225105-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0701 22:52:00.981051  160696 out.go:177]   - MINIKUBE_LOCATION=14483
	I0701 22:52:00.980969  160696 notify.go:193] Checking for updates...
	I0701 22:52:00.986459  160696 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 22:52:00.988066  160696 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 22:52:00.989522  160696 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 22:52:00.990790  160696 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 22:52:00.992493  160696 config.go:178] Loaded profile config "kubernetes-upgrade-20220701225105-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0701 22:52:00.992908  160696 driver.go:360] Setting default libvirt URI to qemu:///system
	I0701 22:52:01.050438  160696 docker.go:137] docker version: linux-20.10.17
	I0701 22:52:01.050581  160696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 22:52:01.231659  160696 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:true NGoroutines:86 SystemTime:2022-07-01 22:52:01.093554422 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 22:52:01.231800  160696 docker.go:254] overlay module found
	I0701 22:52:01.234069  160696 out.go:177] * Using the docker driver based on existing profile
	I0701 22:52:01.235709  160696 start.go:284] selected driver: docker
	I0701 22:52:01.235725  160696 start.go:808] validating driver "docker" against &{Name:kubernetes-upgrade-20220701225105-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220701225105-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 22:52:01.235870  160696 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 22:52:01.236992  160696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 22:52:01.407033  160696 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:true NGoroutines:86 SystemTime:2022-07-01 22:52:01.282017434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 22:52:01.407274  160696 cni.go:95] Creating CNI manager for ""
	I0701 22:52:01.407290  160696 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 22:52:01.407306  160696 start_flags.go:310] config:
	{Name:kubernetes-upgrade-20220701225105-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:kubernetes-upgrade-20220701225105-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 22:52:01.410268  160696 out.go:177] * Starting control plane node kubernetes-upgrade-20220701225105-10066 in cluster kubernetes-upgrade-20220701225105-10066
	I0701 22:52:01.411526  160696 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0701 22:52:01.412671  160696 out.go:177] * Pulling base image ...
	I0701 22:52:01.413680  160696 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 22:52:01.413721  160696 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4
	I0701 22:52:01.413734  160696 cache.go:57] Caching tarball of preloaded images
	I0701 22:52:01.413783  160696 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0701 22:52:01.413962  160696 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 22:52:01.413980  160696 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on containerd
	I0701 22:52:01.414097  160696 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/config.json ...
	I0701 22:52:01.469670  160696 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0701 22:52:01.469726  160696 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0701 22:52:01.469739  160696 cache.go:208] Successfully downloaded all kic artifacts
	I0701 22:52:01.469790  160696 start.go:352] acquiring machines lock for kubernetes-upgrade-20220701225105-10066: {Name:mkca4ee4e060684b1a65a01b55d7372a7dadaa9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 22:52:01.469908  160696 start.go:356] acquired machines lock for "kubernetes-upgrade-20220701225105-10066" in 90.758µs
	I0701 22:52:01.469938  160696 start.go:94] Skipping create...Using existing machine configuration
	I0701 22:52:01.469949  160696 fix.go:55] fixHost starting: 
	I0701 22:52:01.470241  160696 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220701225105-10066 --format={{.State.Status}}
	I0701 22:52:01.505556  160696 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220701225105-10066: state=Stopped err=<nil>
	W0701 22:52:01.505586  160696 fix.go:129] unexpected machine state, will restart: <nil>
	I0701 22:52:01.507298  160696 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-20220701225105-10066" ...
	I0701 22:52:01.508421  160696 cli_runner.go:164] Run: docker start kubernetes-upgrade-20220701225105-10066
	I0701 22:52:01.950093  160696 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220701225105-10066 --format={{.State.Status}}
	I0701 22:52:01.988039  160696 kic.go:416] container "kubernetes-upgrade-20220701225105-10066" state is running.
	I0701 22:52:01.988398  160696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220701225105-10066
	I0701 22:52:02.025152  160696 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/config.json ...
	I0701 22:52:02.025400  160696 machine.go:88] provisioning docker machine ...
	I0701 22:52:02.025428  160696 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220701225105-10066"
	I0701 22:52:02.025476  160696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225105-10066
	I0701 22:52:02.065608  160696 main.go:134] libmachine: Using SSH client type: native
	I0701 22:52:02.065841  160696 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49338 <nil> <nil>}
	I0701 22:52:02.065866  160696 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220701225105-10066 && echo "kubernetes-upgrade-20220701225105-10066" | sudo tee /etc/hostname
	I0701 22:52:02.066480  160696 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47814->127.0.0.1:49338: read: connection reset by peer
	I0701 22:52:05.200330  160696 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220701225105-10066
	
	I0701 22:52:05.200409  160696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225105-10066
	I0701 22:52:05.245209  160696 main.go:134] libmachine: Using SSH client type: native
	I0701 22:52:05.245419  160696 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49338 <nil> <nil>}
	I0701 22:52:05.245457  160696 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220701225105-10066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220701225105-10066/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220701225105-10066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 22:52:05.374979  160696 main.go:134] libmachine: SSH cmd err, output: <nil>: 
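Note: the two SSH commands above pin the machine identity: the first sets the hostname outright, the second idempotently rewrites the 127.0.1.1 alias in /etc/hosts. Once provisioning succeeds, the guest should carry a line like this (sketch, verified over the same SSH session):

	grep 127.0.1.1 /etc/hosts
	# 127.0.1.1 kubernetes-upgrade-20220701225105-10066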
	I0701 22:52:05.375012  160696 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
	I0701 22:52:05.375048  160696 ubuntu.go:177] setting up certificates
	I0701 22:52:05.375069  160696 provision.go:83] configureAuth start
	I0701 22:52:05.375136  160696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220701225105-10066
	I0701 22:52:05.431349  160696 provision.go:138] copyHostCerts
	I0701 22:52:05.431422  160696 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
	I0701 22:52:05.431434  160696 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
	I0701 22:52:05.431511  160696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
	I0701 22:52:05.431625  160696 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
	I0701 22:52:05.431642  160696 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
	I0701 22:52:05.431702  160696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
	I0701 22:52:05.431801  160696 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
	I0701 22:52:05.431809  160696 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
	I0701 22:52:05.431844  160696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1679 bytes)
	I0701 22:52:05.431896  160696 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220701225105-10066 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220701225105-10066]
	I0701 22:52:05.512958  160696 provision.go:172] copyRemoteCerts
	I0701 22:52:05.513009  160696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 22:52:05.513046  160696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225105-10066
	I0701 22:52:05.564725  160696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49338 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/kubernetes-upgrade-20220701225105-10066/id_rsa Username:docker}
	I0701 22:52:05.656435  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 22:52:05.674626  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0701 22:52:05.691508  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 22:52:05.711010  160696 provision.go:86] duration metric: configureAuth took 335.924506ms
	I0701 22:52:05.711039  160696 ubuntu.go:193] setting minikube options for container-runtime
	I0701 22:52:05.711243  160696 config.go:178] Loaded profile config "kubernetes-upgrade-20220701225105-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 22:52:05.711260  160696 machine.go:91] provisioned docker machine in 3.685843473s
	I0701 22:52:05.711268  160696 start.go:306] post-start starting for "kubernetes-upgrade-20220701225105-10066" (driver="docker")
	I0701 22:52:05.711281  160696 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 22:52:05.711328  160696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 22:52:05.711368  160696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225105-10066
	I0701 22:52:05.746101  160696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49338 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/kubernetes-upgrade-20220701225105-10066/id_rsa Username:docker}
	I0701 22:52:05.834588  160696 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 22:52:05.837470  160696 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 22:52:05.837501  160696 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 22:52:05.837515  160696 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 22:52:05.837523  160696 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0701 22:52:05.837533  160696 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
	I0701 22:52:05.837599  160696 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
	I0701 22:52:05.837688  160696 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem -> 100662.pem in /etc/ssl/certs
	I0701 22:52:05.837792  160696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 22:52:05.845188  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /etc/ssl/certs/100662.pem (1708 bytes)
	I0701 22:52:05.862700  160696 start.go:309] post-start completed in 151.416144ms
	I0701 22:52:05.862755  160696 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 22:52:05.862798  160696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225105-10066
	I0701 22:52:05.896029  160696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49338 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/kubernetes-upgrade-20220701225105-10066/id_rsa Username:docker}
	I0701 22:52:05.979023  160696 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 22:52:05.982893  160696 fix.go:57] fixHost completed within 4.512942606s
	I0701 22:52:05.982912  160696 start.go:81] releasing machines lock for "kubernetes-upgrade-20220701225105-10066", held for 4.512990385s
	I0701 22:52:05.983019  160696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220701225105-10066
	I0701 22:52:06.020989  160696 ssh_runner.go:195] Run: systemctl --version
	I0701 22:52:06.021040  160696 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0701 22:52:06.021053  160696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225105-10066
	I0701 22:52:06.021103  160696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225105-10066
	I0701 22:52:06.076141  160696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49338 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/kubernetes-upgrade-20220701225105-10066/id_rsa Username:docker}
	I0701 22:52:06.076429  160696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49338 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/kubernetes-upgrade-20220701225105-10066/id_rsa Username:docker}
	I0701 22:52:06.184470  160696 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 22:52:06.198509  160696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 22:52:06.209942  160696 docker.go:179] disabling docker service ...
	I0701 22:52:06.210000  160696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 22:52:06.220216  160696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 22:52:06.231773  160696 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 22:52:06.327570  160696 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 22:52:06.422909  160696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0701 22:52:06.434878  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 22:52:06.450435  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0701 22:52:06.460554  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0701 22:52:06.470974  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0701 22:52:06.482039  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0701 22:52:06.492195  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0701 22:52:06.502060  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
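Note: the base64 payload in the last command is just the containerd config version header; decoding it shows that the drop-in file pins config format v2 and nothing else:

	echo dmVyc2lvbiA9IDIK | base64 -d
	# version = 2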
	I0701 22:52:06.516711  160696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 22:52:06.527755  160696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 22:52:06.534150  160696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 22:52:06.628808  160696 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 22:52:06.751081  160696 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0701 22:52:06.751155  160696 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0701 22:52:06.755655  160696 start.go:471] Will wait 60s for crictl version
	I0701 22:52:06.755726  160696 ssh_runner.go:195] Run: sudo crictl version
	I0701 22:52:06.807504  160696 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0701 22:52:06.807554  160696 ssh_runner.go:195] Run: containerd --version
	I0701 22:52:06.842805  160696 ssh_runner.go:195] Run: containerd --version
	I0701 22:52:06.880735  160696 out.go:177] * Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	I0701 22:52:06.881945  160696 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220701225105-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 22:52:06.917815  160696 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0701 22:52:06.921307  160696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 22:52:06.935690  160696 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0701 22:52:06.937480  160696 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 22:52:06.937551  160696 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 22:52:06.965301  160696 containerd.go:543] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.2". assuming images are not preloaded.
	I0701 22:52:06.965357  160696 ssh_runner.go:195] Run: which lz4
	I0701 22:52:06.968470  160696 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0701 22:52:06.971551  160696 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0701 22:52:06.971578  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (447741112 bytes)
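Note: the stat failure followed by scp above is minikube's copy guard: probe the remote path and push the ~447 MB preload tarball only when the probe exits non-zero. Reduced to its shape (paths from this run; the real transfer goes through minikube's ssh_runner rather than a literal scp invocation, and <cache>/<node> are placeholders):

	stat -c "%s %y" /preloaded.tar.lz4 \
	  || scp <cache>/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 <node>:/preloaded.tar.lz4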
	I0701 22:52:07.937148  160696 containerd.go:490] Took 0.968708 seconds to copy over tarball
	I0701 22:52:07.937211  160696 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0701 22:52:11.987915  160696 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.05068032s)
	I0701 22:52:11.987948  160696 containerd.go:497] Took 4.050772 seconds to extract the tarball
	I0701 22:52:11.987960  160696 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0701 22:52:12.182505  160696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 22:52:12.265856  160696 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 22:52:12.348706  160696 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 22:52:12.376428  160696 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.2 k8s.gcr.io/kube-controller-manager:v1.24.2 k8s.gcr.io/kube-scheduler:v1.24.2 k8s.gcr.io/kube-proxy:v1.24.2 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0701 22:52:12.376514  160696 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 22:52:12.376531  160696 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.2
	I0701 22:52:12.376559  160696 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.2
	I0701 22:52:12.376565  160696 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
	I0701 22:52:12.376577  160696 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
	I0701 22:52:12.376589  160696 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.2
	I0701 22:52:12.376532  160696 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.2
	I0701 22:52:12.376727  160696 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0701 22:52:12.378085  160696 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.2: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.2
	I0701 22:52:12.378099  160696 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
	I0701 22:52:12.378110  160696 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.2: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.2
	I0701 22:52:12.378087  160696 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.2: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.2
	I0701 22:52:12.378088  160696 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
	I0701 22:52:12.378120  160696 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0701 22:52:12.378157  160696 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.2: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.2
	I0701 22:52:12.378088  160696 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 22:52:12.601423  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
	I0701 22:52:12.601711  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.2"
	I0701 22:52:12.601886  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.2"
	I0701 22:52:12.603244  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.2"
	I0701 22:52:12.622981  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.2"
	I0701 22:52:12.646441  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
	I0701 22:52:12.648175  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
	I0701 22:52:12.691839  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0701 22:52:13.527303  160696 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.2" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.2" does not exist at hash "34cdf99b1bb3b3a62c5b4226c3bc0983ab1f13e776269d1872092091b07203df" in container runtime
	I0701 22:52:13.527358  160696 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.2
	I0701 22:52:13.527403  160696 ssh_runner.go:195] Run: which crictl
	I0701 22:52:13.527487  160696 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0701 22:52:13.527517  160696 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
	I0701 22:52:13.527549  160696 ssh_runner.go:195] Run: which crictl
	I0701 22:52:13.527664  160696 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.2" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.2" does not exist at hash "a634548d10b032c2a1d704ef9a2ab04c12b0574afe67ee192b196a7f12da9536" in container runtime
	I0701 22:52:13.527706  160696 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.2
	I0701 22:52:13.527738  160696 ssh_runner.go:195] Run: which crictl
	I0701 22:52:13.531394  160696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.2
	I0701 22:52:13.531455  160696 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.2" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.2" does not exist at hash "5d725196c1f47e72d2bc7069776d5928b1fb1e4adf09c18997733099aa3663ac" in container runtime
	I0701 22:52:13.531490  160696 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.2
	I0701 22:52:13.531547  160696 ssh_runner.go:195] Run: which crictl
	I0701 22:52:13.626875  160696 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0701 22:52:13.626929  160696 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0701 22:52:13.626969  160696 ssh_runner.go:195] Run: which crictl
	I0701 22:52:13.631737  160696 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0701 22:52:13.631774  160696 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
	I0701 22:52:13.631808  160696 ssh_runner.go:195] Run: which crictl
	I0701 22:52:13.639414  160696 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.2": (1.036134584s)
	I0701 22:52:13.639444  160696 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.2" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.2" does not exist at hash "d3377ffb7177cc4becce8a534d8547aca9530cb30fac9ebe479b31102f1ba503" in container runtime
	I0701 22:52:13.639466  160696 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.2
	I0701 22:52:13.639504  160696 ssh_runner.go:195] Run: which crictl
	I0701 22:52:13.639512  160696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
	I0701 22:52:13.639554  160696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.2
	I0701 22:52:13.639564  160696 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0701 22:52:13.639596  160696 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 22:52:13.639628  160696 ssh_runner.go:195] Run: which crictl
	I0701 22:52:14.500939  160696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.2
	I0701 22:52:14.500949  160696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2
	I0701 22:52:14.501045  160696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
	I0701 22:52:14.501091  160696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.2
	I0701 22:52:14.501121  160696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
	I0701 22:52:14.501163  160696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.2
	I0701 22:52:14.516569  160696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2
	I0701 22:52:14.516675  160696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.2
	I0701 22:52:14.520642  160696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
	I0701 22:52:14.520810  160696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0701 22:52:14.520885  160696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 22:52:14.782487  160696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2
	I0701 22:52:14.782518  160696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
	I0701 22:52:14.782621  160696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.2
	I0701 22:52:14.782639  160696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0701 22:52:14.784461  160696 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.24.2: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.24.2': No such file or directory
	I0701 22:52:14.784490  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2 --> /var/lib/minikube/images/kube-controller-manager_v1.24.2 (31037952 bytes)
	I0701 22:52:14.784594  160696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
	I0701 22:52:14.784661  160696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0701 22:52:14.784741  160696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2
	I0701 22:52:14.784803  160696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.2
	I0701 22:52:14.784894  160696 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.24.2: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.24.2': No such file or directory
	I0701 22:52:14.784912  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2 --> /var/lib/minikube/images/kube-proxy_v1.24.2 (39518208 bytes)
	I0701 22:52:14.784991  160696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0701 22:52:14.785057  160696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0701 22:52:14.785119  160696 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0701 22:52:14.785140  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (102146048 bytes)
	I0701 22:52:14.791533  160696 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0701 22:52:14.791534  160696 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.24.2: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.24.2': No such file or directory
	I0701 22:52:14.791564  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
	I0701 22:52:14.791583  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2 --> /var/lib/minikube/images/kube-scheduler_v1.24.2 (15491584 bytes)
	I0701 22:52:14.792513  160696 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0701 22:52:14.792539  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (311296 bytes)
	I0701 22:52:14.792550  160696 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.24.2: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.24.2': No such file or directory
	I0701 22:52:14.792586  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2 --> /var/lib/minikube/images/kube-apiserver_v1.24.2 (33798144 bytes)
	I0701 22:52:14.792601  160696 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0701 22:52:14.792631  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0701 22:52:14.917586  160696 containerd.go:227] Loading image: /var/lib/minikube/images/pause_3.7
	I0701 22:52:14.917660  160696 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
	I0701 22:52:16.101432  160696 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7: (1.183745805s)
	I0701 22:52:16.101474  160696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
	I0701 22:52:16.101506  160696 containerd.go:227] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0701 22:52:16.101561  160696 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0701 22:52:16.602015  160696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0701 22:52:16.602056  160696 containerd.go:227] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.2
	I0701 22:52:16.602104  160696 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.24.2
	I0701 22:52:17.570921  160696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2 from cache
	I0701 22:52:17.570969  160696 containerd.go:227] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0701 22:52:17.571024  160696 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
	I0701 22:52:18.259818  160696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
	I0701 22:52:18.259867  160696 containerd.go:227] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.2
	I0701 22:52:18.259912  160696 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.24.2
	I0701 22:52:20.084695  160696 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.24.2: (1.824738355s)
	I0701 22:52:20.084729  160696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2 from cache
	I0701 22:52:20.084761  160696 containerd.go:227] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.2
	I0701 22:52:20.084795  160696 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.24.2
	I0701 22:52:25.445330  160696 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.24.2: (5.360506175s)
	I0701 22:52:25.445364  160696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2 from cache
	I0701 22:52:25.445390  160696 containerd.go:227] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.2
	I0701 22:52:25.445425  160696 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.24.2
	I0701 22:52:26.528902  160696 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.24.2: (1.083452525s)
	I0701 22:52:26.528929  160696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2 from cache
	I0701 22:52:26.528962  160696 containerd.go:227] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0701 22:52:26.528998  160696 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
	I0701 22:52:30.457001  160696 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (3.927974233s)
	I0701 22:52:30.457030  160696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
	I0701 22:52:30.457060  160696 cache_images.go:123] Successfully loaded all cached images
	I0701 22:52:30.457066  160696 cache_images.go:92] LoadImages completed in 18.080611233s
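The block above is minikube's cache-load cycle: stat each image tarball on the node, scp whatever is missing from the host-side cache, then import it into containerd's k8s.io namespace with ctr. A minimal sketch of the same cycle, assuming a reachable `node` SSH alias and illustrative paths:

	# Check/transfer/import cycle from the log above (paths and the `node` alias are illustrative)
	IMG=/var/lib/minikube/images/pause_3.7
	CACHE=$HOME/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
	ssh node "stat -c '%s %y' $IMG" \
	  || scp "$CACHE" "node:$IMG"                        # transfer only on a failed existence check
	ssh node "sudo ctr -n=k8s.io images import $IMG"     # load into containerd's k8s.io namespace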
	I0701 22:52:30.457117  160696 ssh_runner.go:195] Run: sudo crictl info
	I0701 22:52:30.488778  160696 cni.go:95] Creating CNI manager for ""
	I0701 22:52:30.488811  160696 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 22:52:30.488829  160696 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0701 22:52:30.488848  160696 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220701225105-10066 NodeName:kubernetes-upgrade-20220701225105-10066 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0701 22:52:30.489024  160696 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kubernetes-upgrade-20220701225105-10066"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0701 22:52:30.489135  160696 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-20220701225105-10066 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:kubernetes-upgrade-20220701225105-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
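One detail in the ExecStart line above matters for everything that follows: --cni-conf-dir, injected via the profile's kubelet ExtraOptions, was removed from the kubelet in v1.24 together with the other dockershim networking flags, so the v1.24.2 kubelet configured here cannot even parse its command line. A quick way to probe whether a given kubelet binary still accepts the flag (a sketch, run on the node):

	# Probe a kubelet binary for the removed flag (sketch)
	/var/lib/minikube/binaries/v1.24.2/kubelet --help 2>&1 | grep -q -- '--cni-conf-dir' \
	  && echo "flag supported" \
	  || echo "flag not supported (removed in v1.24)"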
	I0701 22:52:30.489211  160696 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0701 22:52:30.497442  160696 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 22:52:30.497508  160696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 22:52:30.505952  160696 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (563 bytes)
	I0701 22:52:30.520033  160696 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 22:52:30.533969  160696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2068 bytes)
	I0701 22:52:30.547983  160696 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0701 22:52:30.551271  160696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
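The one-liner above is an idempotent /etc/hosts update: strip any existing control-plane.minikube.internal entry, append the current IP, and install the result with sudo cp so that only the final copy needs elevated rights. The same logic, unrolled with comments:

	# /etc/hosts update from the log, unrolled (the $'\t' is a literal tab)
	{
	  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts   # drop any stale entry
	  printf '192.168.76.2\tcontrol-plane.minikube.internal\n'   # append the current IP
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts   # sudo only for the final copy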
	I0701 22:52:30.562236  160696 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066 for IP: 192.168.76.2
	I0701 22:52:30.562332  160696 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
	I0701 22:52:30.562366  160696 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
	I0701 22:52:30.562455  160696 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/client.key
	I0701 22:52:30.562565  160696 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/apiserver.key.31bdca25
	I0701 22:52:30.562627  160696 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/proxy-client.key
	I0701 22:52:30.562773  160696 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem (1338 bytes)
	W0701 22:52:30.562811  160696 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066_empty.pem, impossibly tiny 0 bytes
	I0701 22:52:30.562829  160696 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 22:52:30.562864  160696 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
	I0701 22:52:30.562897  160696 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
	I0701 22:52:30.562930  160696 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1679 bytes)
	I0701 22:52:30.562977  160696 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem (1708 bytes)
	I0701 22:52:30.563731  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0701 22:52:30.581831  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 22:52:30.600082  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 22:52:30.617194  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 22:52:30.634897  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 22:52:30.656519  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0701 22:52:30.675784  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 22:52:30.694370  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 22:52:30.713110  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem --> /usr/share/ca-certificates/10066.pem (1338 bytes)
	I0701 22:52:30.730872  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /usr/share/ca-certificates/100662.pem (1708 bytes)
	I0701 22:52:30.749516  160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 22:52:30.768728  160696 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 22:52:30.782590  160696 ssh_runner.go:195] Run: openssl version
	I0701 22:52:30.788044  160696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 22:52:30.795818  160696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 22:52:30.798782  160696 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul  1 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0701 22:52:30.798829  160696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 22:52:30.803528  160696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 22:52:30.810082  160696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10066.pem && ln -fs /usr/share/ca-certificates/10066.pem /etc/ssl/certs/10066.pem"
	I0701 22:52:30.817262  160696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10066.pem
	I0701 22:52:30.820055  160696 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul  1 22:28 /usr/share/ca-certificates/10066.pem
	I0701 22:52:30.820096  160696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10066.pem
	I0701 22:52:30.825093  160696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10066.pem /etc/ssl/certs/51391683.0"
	I0701 22:52:30.833317  160696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100662.pem && ln -fs /usr/share/ca-certificates/100662.pem /etc/ssl/certs/100662.pem"
	I0701 22:52:30.840631  160696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100662.pem
	I0701 22:52:30.843783  160696 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul  1 22:28 /usr/share/ca-certificates/100662.pem
	I0701 22:52:30.843828  160696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100662.pem
	I0701 22:52:30.848889  160696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100662.pem /etc/ssl/certs/3ec20f2e.0"
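The ls/openssl/ln sequence above re-creates OpenSSL's hashed-directory convention by hand: each CA certificate is symlinked into /etc/ssl/certs under <subject-hash>.0, where the hash is what `openssl x509 -hash` prints (b5213941 for minikubeCA.pem above). As a sketch:

	# c_rehash-style symlink, done manually as in the log (sketch)
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # matches b5213941.0 above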
	I0701 22:52:30.855610  160696 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220701225105-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:kubernetes-upgrade-20220701225105-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 22:52:30.855704  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0701 22:52:30.855736  160696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 22:52:30.882763  160696 cri.go:87] found id: ""
	I0701 22:52:30.882831  160696 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 22:52:30.891094  160696 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0701 22:52:30.891122  160696 kubeadm.go:626] restartCluster start
	I0701 22:52:30.891170  160696 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 22:52:30.897805  160696 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 22:52:30.898636  160696 kubeconfig.go:116] verify returned: extract IP: "kubernetes-upgrade-20220701225105-10066" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 22:52:30.899075  160696 kubeconfig.go:127] "kubernetes-upgrade-20220701225105-10066" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig - will repair!
	I0701 22:52:30.899750  160696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 22:52:30.900536  160696 kapi.go:59] client config for kubernetes-upgrade-20220701225105-10066: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x173d480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0701 22:52:30.900979  160696 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 22:52:30.908582  160696 kubeadm.go:593] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-07-01 22:51:22.101568183 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-07-01 22:52:30.544362035 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -17,7 +17,7 @@
	     node-ip: 192.168.76.2
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	@@ -31,16 +31,14 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-20220701225105-10066
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	-kubernetesVersion: v1.16.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.24.2
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
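The diff boils down to a kubeadm API bump, v1beta1 (written by the earlier v1.16.0 start) to v1beta3 (expected by v1.24.2), plus the clusterName, dns and etcd extraArgs changes, so minikube regenerates the file outright. For comparison, kubeadm has its own translation command, though a v1.24 binary no longer reads v1beta1, which is presumably why regeneration is used here (sketch):

	# kubeadm's own config translation (sketch; fails if the old API version is no longer supported)
	kubeadm config migrate \
	  --old-config /var/tmp/minikube/kubeadm.yaml \
	  --new-config /var/tmp/minikube/kubeadm.yaml.new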
	I0701 22:52:30.908601  160696 kubeadm.go:1092] stopping kube-system containers ...
	I0701 22:52:30.908613  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0701 22:52:30.908647  160696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 22:52:30.933111  160696 cri.go:87] found id: ""
	I0701 22:52:30.933169  160696 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0701 22:52:30.943408  160696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 22:52:30.950467  160696 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5759 Jul  1 22:51 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5799 Jul  1 22:51 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5959 Jul  1 22:51 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5747 Jul  1 22:51 /etc/kubernetes/scheduler.conf
	
	I0701 22:52:30.952037  160696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0701 22:52:30.959157  160696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0701 22:52:30.966197  160696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0701 22:52:30.972868  160696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0701 22:52:30.979275  160696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 22:52:30.986195  160696 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0701 22:52:30.986214  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 22:52:31.037016  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 22:52:32.267951  160696 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.230899167s)
	I0701 22:52:32.267987  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0701 22:52:32.458261  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 22:52:32.513001  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
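The restart path replays only the init phases needed to refresh an existing control plane (certs, kubeconfig, kubelet-start, control-plane, then etcd) against the regenerated config, rather than a full kubeadm init. Condensed into a loop, same commands as above:

	# Phase sequence used by the cluster restart above (sketch)
	CFG=/var/tmp/minikube/kubeadm.yaml
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" \
	    kubeadm init phase $phase --config "$CFG"   # $phase left unquoted on purpose: splits into subcommand + scope
	done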
	I0701 22:52:32.554058  160696 api_server.go:51] waiting for apiserver process to appear ...
	I0701 22:52:32.554121  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:33.063016  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:33.563243  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:34.062742  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:34.562479  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:35.062887  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:35.563075  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:36.062721  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:36.562658  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:37.062684  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:37.563067  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:38.062437  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:38.562683  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:39.062673  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:39.563181  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:40.063183  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:40.563031  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:41.062924  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:41.562700  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:42.062535  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:42.562461  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:43.062653  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:43.563295  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:44.063289  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:44.563277  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:45.063058  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:45.562428  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:46.062403  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:46.562561  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:47.062691  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:47.562598  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:48.062722  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:48.562683  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:49.063113  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:49.563245  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:50.063327  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:50.562652  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:51.063105  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:51.562659  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:52.063076  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:52.563144  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:53.063450  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:53.562566  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:54.062671  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:54.562642  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:55.062670  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:55.563272  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:56.062638  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:56.562693  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:57.062638  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:57.562669  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:58.063033  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:58.563121  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:59.062667  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:52:59.562558  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:00.063133  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:00.562727  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:01.062817  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:01.563163  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:02.062647  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:02.562665  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:03.063136  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:03.562518  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:04.063122  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:04.563400  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:05.063367  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:05.562403  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:06.063192  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:06.563153  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:07.063404  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:07.563357  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:08.063143  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:08.562678  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:09.063402  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:09.563129  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:10.063343  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:10.562588  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:11.063252  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:11.562663  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:12.062673  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:12.562659  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:13.063289  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:13.562852  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:14.063015  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:14.562637  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:15.062696  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:15.562602  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:16.062825  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:16.563310  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:17.062669  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:17.562689  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:18.062684  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:18.562486  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:19.063167  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:19.563425  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:20.062701  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:20.563489  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:21.063210  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:21.562989  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:22.062697  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:22.562696  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:23.063062  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:23.562700  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:24.062481  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:24.563411  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:25.062692  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:25.562915  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:26.062674  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:26.562725  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:27.062993  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:27.563174  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:28.062426  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:28.562522  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:29.062990  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:29.563289  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:30.062670  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:30.563081  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:31.063188  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:31.563278  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:32.062432  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
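The hundred-plus pgrep lines above are the apiserver readiness wait, one probe roughly every 500 ms for a minute; none ever matches because the kubelet never gets far enough to launch the static pods. The loop is equivalent to:

	# Equivalent of the wait loop above (here it would spin until a timeout, never matching)
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	  sleep 0.5
	done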
	I0701 22:53:32.562471  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 22:53:32.562572  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 22:53:32.585597  160696 cri.go:87] found id: ""
	I0701 22:53:32.585622  160696 logs.go:274] 0 containers: []
	W0701 22:53:32.585628  160696 logs.go:276] No container was found matching "kube-apiserver"
	I0701 22:53:32.585634  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 22:53:32.585683  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 22:53:32.607548  160696 cri.go:87] found id: ""
	I0701 22:53:32.607575  160696 logs.go:274] 0 containers: []
	W0701 22:53:32.607582  160696 logs.go:276] No container was found matching "etcd"
	I0701 22:53:32.607588  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 22:53:32.607640  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 22:53:32.629319  160696 cri.go:87] found id: ""
	I0701 22:53:32.629346  160696 logs.go:274] 0 containers: []
	W0701 22:53:32.629354  160696 logs.go:276] No container was found matching "coredns"
	I0701 22:53:32.629361  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 22:53:32.629413  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 22:53:32.650769  160696 cri.go:87] found id: ""
	I0701 22:53:32.650794  160696 logs.go:274] 0 containers: []
	W0701 22:53:32.650801  160696 logs.go:276] No container was found matching "kube-scheduler"
	I0701 22:53:32.650810  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 22:53:32.650866  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 22:53:32.672723  160696 cri.go:87] found id: ""
	I0701 22:53:32.672748  160696 logs.go:274] 0 containers: []
	W0701 22:53:32.672758  160696 logs.go:276] No container was found matching "kube-proxy"
	I0701 22:53:32.672766  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 22:53:32.672817  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 22:53:32.695551  160696 cri.go:87] found id: ""
	I0701 22:53:32.695571  160696 logs.go:274] 0 containers: []
	W0701 22:53:32.695580  160696 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0701 22:53:32.695590  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 22:53:32.695639  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 22:53:32.718224  160696 cri.go:87] found id: ""
	I0701 22:53:32.718249  160696 logs.go:274] 0 containers: []
	W0701 22:53:32.718257  160696 logs.go:276] No container was found matching "storage-provisioner"
	I0701 22:53:32.718264  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 22:53:32.718316  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 22:53:32.740861  160696 cri.go:87] found id: ""
	I0701 22:53:32.740887  160696 logs.go:274] 0 containers: []
	W0701 22:53:32.740895  160696 logs.go:276] No container was found matching "kube-controller-manager"
	I0701 22:53:32.740904  160696 logs.go:123] Gathering logs for kubelet ...
	I0701 22:53:32.740916  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 22:53:32.786433  160696 logs.go:138] Found kubelet problem: Jul 01 22:53:32 kubernetes-upgrade-20220701225105-10066 kubelet[2334]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:53:32.834141  160696 logs.go:123] Gathering logs for dmesg ...
	I0701 22:53:32.834180  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 22:53:32.848164  160696 logs.go:123] Gathering logs for describe nodes ...
	I0701 22:53:32.848190  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0701 22:53:32.898660  160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0701 22:53:32.898682  160696 logs.go:123] Gathering logs for containerd ...
	I0701 22:53:32.898694  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 22:53:32.935746  160696 logs.go:123] Gathering logs for container status ...
	I0701 22:53:32.935776  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 22:53:32.960887  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:53:32.960912  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0701 22:53:32.961021  160696 out.go:239] X Problems detected in kubelet:
	W0701 22:53:32.961035  160696 out.go:239]   Jul 01 22:53:32 kubernetes-upgrade-20220701225105-10066 kubelet[2334]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:53:32.961039  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:53:32.961044  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
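The kubelet problem lines pin the failure down: each kubelet restart (pid 2334 here, 2626 and 2915 in the retries below) dies at flag parsing on --cni-conf-dir, so no control-plane container is ever created and every crictl query above returns empty. Running the binary by hand on the node reproduces it (hypothetical direct invocation):

	# Reproduce the journal error directly (hypothetical invocation on the node)
	sudo /var/lib/minikube/binaries/v1.24.2/kubelet --cni-conf-dir=/etc/cni/net.mk
	# => Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir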
	I0701 22:53:42.961493  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:43.063404  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 22:53:43.063480  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 22:53:43.090753  160696 cri.go:87] found id: ""
	I0701 22:53:43.090778  160696 logs.go:274] 0 containers: []
	W0701 22:53:43.090788  160696 logs.go:276] No container was found matching "kube-apiserver"
	I0701 22:53:43.090796  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 22:53:43.090848  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 22:53:43.113480  160696 cri.go:87] found id: ""
	I0701 22:53:43.113508  160696 logs.go:274] 0 containers: []
	W0701 22:53:43.113516  160696 logs.go:276] No container was found matching "etcd"
	I0701 22:53:43.113523  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 22:53:43.113563  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 22:53:43.140180  160696 cri.go:87] found id: ""
	I0701 22:53:43.140219  160696 logs.go:274] 0 containers: []
	W0701 22:53:43.140227  160696 logs.go:276] No container was found matching "coredns"
	I0701 22:53:43.140236  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 22:53:43.140286  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 22:53:43.168190  160696 cri.go:87] found id: ""
	I0701 22:53:43.168217  160696 logs.go:274] 0 containers: []
	W0701 22:53:43.168226  160696 logs.go:276] No container was found matching "kube-scheduler"
	I0701 22:53:43.168235  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 22:53:43.168283  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 22:53:43.194136  160696 cri.go:87] found id: ""
	I0701 22:53:43.194160  160696 logs.go:274] 0 containers: []
	W0701 22:53:43.194169  160696 logs.go:276] No container was found matching "kube-proxy"
	I0701 22:53:43.194176  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 22:53:43.194226  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 22:53:43.215600  160696 cri.go:87] found id: ""
	I0701 22:53:43.215625  160696 logs.go:274] 0 containers: []
	W0701 22:53:43.215634  160696 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0701 22:53:43.215642  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 22:53:43.215715  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 22:53:43.242014  160696 cri.go:87] found id: ""
	I0701 22:53:43.242042  160696 logs.go:274] 0 containers: []
	W0701 22:53:43.242051  160696 logs.go:276] No container was found matching "storage-provisioner"
	I0701 22:53:43.242072  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 22:53:43.242127  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 22:53:43.268163  160696 cri.go:87] found id: ""
	I0701 22:53:43.268188  160696 logs.go:274] 0 containers: []
	W0701 22:53:43.268196  160696 logs.go:276] No container was found matching "kube-controller-manager"
	I0701 22:53:43.268207  160696 logs.go:123] Gathering logs for kubelet ...
	I0701 22:53:43.268220  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 22:53:43.318032  160696 logs.go:138] Found kubelet problem: Jul 01 22:53:42 kubernetes-upgrade-20220701225105-10066 kubelet[2626]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:53:43.382286  160696 logs.go:123] Gathering logs for dmesg ...
	I0701 22:53:43.382321  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 22:53:43.397700  160696 logs.go:123] Gathering logs for describe nodes ...
	I0701 22:53:43.397730  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0701 22:53:43.453125  160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0701 22:53:43.453152  160696 logs.go:123] Gathering logs for containerd ...
	I0701 22:53:43.453165  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 22:53:43.499946  160696 logs.go:123] Gathering logs for container status ...
	I0701 22:53:43.499979  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 22:53:43.525892  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:53:43.525921  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0701 22:53:43.526035  160696 out.go:239] X Problems detected in kubelet:
	W0701 22:53:43.526057  160696 out.go:239]   Jul 01 22:53:42 kubernetes-upgrade-20220701225105-10066 kubelet[2626]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:53:43.526065  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:53:43.526073  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:53:53.527728  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:53:53.563164  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 22:53:53.563241  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 22:53:53.585931  160696 cri.go:87] found id: ""
	I0701 22:53:53.585964  160696 logs.go:274] 0 containers: []
	W0701 22:53:53.585972  160696 logs.go:276] No container was found matching "kube-apiserver"
	I0701 22:53:53.585981  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 22:53:53.586045  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 22:53:53.611388  160696 cri.go:87] found id: ""
	I0701 22:53:53.611414  160696 logs.go:274] 0 containers: []
	W0701 22:53:53.611420  160696 logs.go:276] No container was found matching "etcd"
	I0701 22:53:53.611425  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 22:53:53.611481  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 22:53:53.635094  160696 cri.go:87] found id: ""
	I0701 22:53:53.635117  160696 logs.go:274] 0 containers: []
	W0701 22:53:53.635126  160696 logs.go:276] No container was found matching "coredns"
	I0701 22:53:53.635133  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 22:53:53.635187  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 22:53:53.656953  160696 cri.go:87] found id: ""
	I0701 22:53:53.656978  160696 logs.go:274] 0 containers: []
	W0701 22:53:53.656987  160696 logs.go:276] No container was found matching "kube-scheduler"
	I0701 22:53:53.656994  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 22:53:53.657041  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 22:53:53.678488  160696 cri.go:87] found id: ""
	I0701 22:53:53.678510  160696 logs.go:274] 0 containers: []
	W0701 22:53:53.678518  160696 logs.go:276] No container was found matching "kube-proxy"
	I0701 22:53:53.678526  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 22:53:53.678601  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 22:53:53.699827  160696 cri.go:87] found id: ""
	I0701 22:53:53.699852  160696 logs.go:274] 0 containers: []
	W0701 22:53:53.699861  160696 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0701 22:53:53.699869  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 22:53:53.699911  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 22:53:53.721604  160696 cri.go:87] found id: ""
	I0701 22:53:53.721644  160696 logs.go:274] 0 containers: []
	W0701 22:53:53.721654  160696 logs.go:276] No container was found matching "storage-provisioner"
	I0701 22:53:53.721664  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 22:53:53.721716  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 22:53:53.743391  160696 cri.go:87] found id: ""
	I0701 22:53:53.743409  160696 logs.go:274] 0 containers: []
	W0701 22:53:53.743416  160696 logs.go:276] No container was found matching "kube-controller-manager"
	I0701 22:53:53.743423  160696 logs.go:123] Gathering logs for containerd ...
	I0701 22:53:53.743432  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 22:53:53.777151  160696 logs.go:123] Gathering logs for container status ...
	I0701 22:53:53.777179  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 22:53:53.801530  160696 logs.go:123] Gathering logs for kubelet ...
	I0701 22:53:53.801556  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 22:53:53.846297  160696 logs.go:138] Found kubelet problem: Jul 01 22:53:53 kubernetes-upgrade-20220701225105-10066 kubelet[2915]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:53:53.896001  160696 logs.go:123] Gathering logs for dmesg ...
	I0701 22:53:53.896031  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 22:53:53.909709  160696 logs.go:123] Gathering logs for describe nodes ...
	I0701 22:53:53.909732  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0701 22:53:53.958673  160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0701 22:53:53.958699  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:53:53.958711  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0701 22:53:53.958839  160696 out.go:239] X Problems detected in kubelet:
	W0701 22:53:53.958853  160696 out.go:239]   Jul 01 22:53:53 kubernetes-upgrade-20220701225105-10066 kubelet[2915]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:53:53.958860  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:53:53.958871  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
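Each ten-second cycle above walks the same fixed list of control-plane names through crictl and finds nothing. A minimal equivalent sweep, runnable inside the node (crictl ships in the minikube node image; the name list is copied from the log):

  # Query each expected control-plane container by name, as minikube does
  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
           kubernetes-dashboard storage-provisioner kube-controller-manager; do
    ids=$(sudo crictl ps -a --quiet --name="$c")
    echo "$c: ${ids:-<none>}"
  done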
	I0701 22:54:03.960392  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:54:04.062620  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 22:54:04.062708  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 22:54:04.095188  160696 cri.go:87] found id: ""
	I0701 22:54:04.095218  160696 logs.go:274] 0 containers: []
	W0701 22:54:04.095228  160696 logs.go:276] No container was found matching "kube-apiserver"
	I0701 22:54:04.095236  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 22:54:04.095289  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 22:54:04.123432  160696 cri.go:87] found id: ""
	I0701 22:54:04.123460  160696 logs.go:274] 0 containers: []
	W0701 22:54:04.123468  160696 logs.go:276] No container was found matching "etcd"
	I0701 22:54:04.123476  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 22:54:04.123530  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 22:54:04.154843  160696 cri.go:87] found id: ""
	I0701 22:54:04.154887  160696 logs.go:274] 0 containers: []
	W0701 22:54:04.154897  160696 logs.go:276] No container was found matching "coredns"
	I0701 22:54:04.154906  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 22:54:04.154960  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 22:54:04.181719  160696 cri.go:87] found id: ""
	I0701 22:54:04.181740  160696 logs.go:274] 0 containers: []
	W0701 22:54:04.181745  160696 logs.go:276] No container was found matching "kube-scheduler"
	I0701 22:54:04.181751  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 22:54:04.181793  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 22:54:04.208646  160696 cri.go:87] found id: ""
	I0701 22:54:04.208671  160696 logs.go:274] 0 containers: []
	W0701 22:54:04.208683  160696 logs.go:276] No container was found matching "kube-proxy"
	I0701 22:54:04.208692  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 22:54:04.208746  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 22:54:04.241818  160696 cri.go:87] found id: ""
	I0701 22:54:04.241877  160696 logs.go:274] 0 containers: []
	W0701 22:54:04.241898  160696 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0701 22:54:04.241912  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 22:54:04.241971  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 22:54:04.270953  160696 cri.go:87] found id: ""
	I0701 22:54:04.270981  160696 logs.go:274] 0 containers: []
	W0701 22:54:04.270989  160696 logs.go:276] No container was found matching "storage-provisioner"
	I0701 22:54:04.270996  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 22:54:04.271054  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 22:54:04.296294  160696 cri.go:87] found id: ""
	I0701 22:54:04.296319  160696 logs.go:274] 0 containers: []
	W0701 22:54:04.296329  160696 logs.go:276] No container was found matching "kube-controller-manager"
	I0701 22:54:04.296341  160696 logs.go:123] Gathering logs for describe nodes ...
	I0701 22:54:04.296366  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0701 22:54:04.352321  160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0701 22:54:04.352346  160696 logs.go:123] Gathering logs for containerd ...
	I0701 22:54:04.352362  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 22:54:04.396791  160696 logs.go:123] Gathering logs for container status ...
	I0701 22:54:04.396831  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 22:54:04.424182  160696 logs.go:123] Gathering logs for kubelet ...
	I0701 22:54:04.424213  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 22:54:04.472377  160696 logs.go:138] Found kubelet problem: Jul 01 22:54:03 kubernetes-upgrade-20220701225105-10066 kubelet[3201]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:54:04.517232  160696 logs.go:123] Gathering logs for dmesg ...
	I0701 22:54:04.517269  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 22:54:04.532247  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:54:04.532278  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0701 22:54:04.532401  160696 out.go:239] X Problems detected in kubelet:
	W0701 22:54:04.532418  160696 out.go:239]   Jul 01 22:54:03 kubernetes-upgrade-20220701225105-10066 kubelet[3201]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:54:04.532424  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:54:04.532433  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
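The kubelet gather (journalctl -u kubelet -n 400) keeps surfacing the same single fatal line. To watch only that signal instead of reading 400-line dumps, something like the following would do; the match string is copied verbatim from the log:

  # Follow the kubelet journal and keep only the flag-parse failure
  sudo journalctl -u kubelet -f --no-pager | grep -F 'failed to parse kubelet flag'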
	I0701 22:54:14.533501  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:54:14.563105  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 22:54:14.563185  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 22:54:14.590523  160696 cri.go:87] found id: ""
	I0701 22:54:14.590588  160696 logs.go:274] 0 containers: []
	W0701 22:54:14.590596  160696 logs.go:276] No container was found matching "kube-apiserver"
	I0701 22:54:14.590601  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 22:54:14.590646  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 22:54:14.613185  160696 cri.go:87] found id: ""
	I0701 22:54:14.613205  160696 logs.go:274] 0 containers: []
	W0701 22:54:14.613213  160696 logs.go:276] No container was found matching "etcd"
	I0701 22:54:14.613218  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 22:54:14.613256  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 22:54:14.640142  160696 cri.go:87] found id: ""
	I0701 22:54:14.640168  160696 logs.go:274] 0 containers: []
	W0701 22:54:14.640182  160696 logs.go:276] No container was found matching "coredns"
	I0701 22:54:14.640190  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 22:54:14.640240  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 22:54:14.668385  160696 cri.go:87] found id: ""
	I0701 22:54:14.668415  160696 logs.go:274] 0 containers: []
	W0701 22:54:14.668426  160696 logs.go:276] No container was found matching "kube-scheduler"
	I0701 22:54:14.668436  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 22:54:14.668501  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 22:54:14.692667  160696 cri.go:87] found id: ""
	I0701 22:54:14.692690  160696 logs.go:274] 0 containers: []
	W0701 22:54:14.692699  160696 logs.go:276] No container was found matching "kube-proxy"
	I0701 22:54:14.692708  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 22:54:14.692764  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 22:54:14.715534  160696 cri.go:87] found id: ""
	I0701 22:54:14.715566  160696 logs.go:274] 0 containers: []
	W0701 22:54:14.715574  160696 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0701 22:54:14.715582  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 22:54:14.715632  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 22:54:14.745303  160696 cri.go:87] found id: ""
	I0701 22:54:14.745329  160696 logs.go:274] 0 containers: []
	W0701 22:54:14.745338  160696 logs.go:276] No container was found matching "storage-provisioner"
	I0701 22:54:14.745346  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 22:54:14.745413  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 22:54:14.775750  160696 cri.go:87] found id: ""
	I0701 22:54:14.775777  160696 logs.go:274] 0 containers: []
	W0701 22:54:14.775785  160696 logs.go:276] No container was found matching "kube-controller-manager"
	I0701 22:54:14.775797  160696 logs.go:123] Gathering logs for kubelet ...
	I0701 22:54:14.775811  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 22:54:14.839644  160696 logs.go:138] Found kubelet problem: Jul 01 22:54:14 kubernetes-upgrade-20220701225105-10066 kubelet[3487]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:54:14.901382  160696 logs.go:123] Gathering logs for dmesg ...
	I0701 22:54:14.901413  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 22:54:14.916871  160696 logs.go:123] Gathering logs for describe nodes ...
	I0701 22:54:14.916912  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0701 22:54:14.984706  160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0701 22:54:14.984736  160696 logs.go:123] Gathering logs for containerd ...
	I0701 22:54:14.984747  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 22:54:15.023712  160696 logs.go:123] Gathering logs for container status ...
	I0701 22:54:15.023772  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 22:54:15.057095  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:54:15.057126  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0701 22:54:15.057274  160696 out.go:239] X Problems detected in kubelet:
	W0701 22:54:15.057294  160696 out.go:239]   Jul 01 22:54:14 kubernetes-upgrade-20220701225105-10066 kubelet[3487]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:54:15.057301  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:54:15.057313  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
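"The connection to the server localhost:8443 was refused" in every describe-nodes attempt is the same fact seen from the client side: nothing is bound to the apiserver port on the node. A direct check, assuming ss and curl are available in the node image:

  # Confirm there is no listener on the apiserver port
  sudo ss -tlnp | grep ':8443' || echo 'nothing listening on 8443'
  curl -ksS https://localhost:8443/healthz || true   # connection refused until an apiserver starts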
	I0701 22:54:25.058476  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:54:25.562417  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 22:54:25.562478  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 22:54:25.584954  160696 cri.go:87] found id: ""
	I0701 22:54:25.584980  160696 logs.go:274] 0 containers: []
	W0701 22:54:25.584990  160696 logs.go:276] No container was found matching "kube-apiserver"
	I0701 22:54:25.584998  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 22:54:25.585056  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 22:54:25.607425  160696 cri.go:87] found id: ""
	I0701 22:54:25.607454  160696 logs.go:274] 0 containers: []
	W0701 22:54:25.607463  160696 logs.go:276] No container was found matching "etcd"
	I0701 22:54:25.607469  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 22:54:25.607512  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 22:54:25.641054  160696 cri.go:87] found id: ""
	I0701 22:54:25.641090  160696 logs.go:274] 0 containers: []
	W0701 22:54:25.641115  160696 logs.go:276] No container was found matching "coredns"
	I0701 22:54:25.641126  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 22:54:25.641188  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 22:54:25.677105  160696 cri.go:87] found id: ""
	I0701 22:54:25.677134  160696 logs.go:274] 0 containers: []
	W0701 22:54:25.677143  160696 logs.go:276] No container was found matching "kube-scheduler"
	I0701 22:54:25.677151  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 22:54:25.677211  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 22:54:25.703887  160696 cri.go:87] found id: ""
	I0701 22:54:25.703913  160696 logs.go:274] 0 containers: []
	W0701 22:54:25.703922  160696 logs.go:276] No container was found matching "kube-proxy"
	I0701 22:54:25.703929  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 22:54:25.703972  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 22:54:25.733969  160696 cri.go:87] found id: ""
	I0701 22:54:25.733999  160696 logs.go:274] 0 containers: []
	W0701 22:54:25.734010  160696 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0701 22:54:25.734019  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 22:54:25.734079  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 22:54:25.776644  160696 cri.go:87] found id: ""
	I0701 22:54:25.776668  160696 logs.go:274] 0 containers: []
	W0701 22:54:25.776675  160696 logs.go:276] No container was found matching "storage-provisioner"
	I0701 22:54:25.776681  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 22:54:25.776732  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 22:54:25.800401  160696 cri.go:87] found id: ""
	I0701 22:54:25.800432  160696 logs.go:274] 0 containers: []
	W0701 22:54:25.800441  160696 logs.go:276] No container was found matching "kube-controller-manager"
	I0701 22:54:25.800452  160696 logs.go:123] Gathering logs for kubelet ...
	I0701 22:54:25.800464  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 22:54:25.867632  160696 logs.go:138] Found kubelet problem: Jul 01 22:54:25 kubernetes-upgrade-20220701225105-10066 kubelet[3842]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:54:25.914018  160696 logs.go:123] Gathering logs for dmesg ...
	I0701 22:54:25.914046  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 22:54:25.934795  160696 logs.go:123] Gathering logs for describe nodes ...
	I0701 22:54:25.934832  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0701 22:54:25.993364  160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0701 22:54:25.993387  160696 logs.go:123] Gathering logs for containerd ...
	I0701 22:54:25.993398  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 22:54:26.036597  160696 logs.go:123] Gathering logs for container status ...
	I0701 22:54:26.036638  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 22:54:26.071276  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:54:26.071302  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0701 22:54:26.071401  160696 out.go:239] X Problems detected in kubelet:
	W0701 22:54:26.071414  160696 out.go:239]   Jul 01 22:54:25 kubernetes-upgrade-20220701225105-10066 kubelet[3842]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:54:26.071423  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:54:26.071428  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
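The probe that gates each cycle is the pgrep above: -f matches the pattern against the full command line, -x requires the match to cover that command line exactly, and -n keeps only the newest match. pgrep exits 1 when nothing matches, which is what sends minikube back into log gathering:

  # Reproduce the apiserver liveness probe by hand
  sudo pgrep -xnf 'kube-apiserver.*minikube.*'; echo "exit=$?"   # exit=1 while no apiserver runs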
	I0701 22:54:36.072175  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:54:36.563262  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 22:54:36.563462  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 22:54:36.594253  160696 cri.go:87] found id: ""
	I0701 22:54:36.594277  160696 logs.go:274] 0 containers: []
	W0701 22:54:36.594283  160696 logs.go:276] No container was found matching "kube-apiserver"
	I0701 22:54:36.594289  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 22:54:36.594329  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 22:54:36.616366  160696 cri.go:87] found id: ""
	I0701 22:54:36.616388  160696 logs.go:274] 0 containers: []
	W0701 22:54:36.616394  160696 logs.go:276] No container was found matching "etcd"
	I0701 22:54:36.616401  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 22:54:36.616445  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 22:54:36.649662  160696 cri.go:87] found id: ""
	I0701 22:54:36.649688  160696 logs.go:274] 0 containers: []
	W0701 22:54:36.649702  160696 logs.go:276] No container was found matching "coredns"
	I0701 22:54:36.649711  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 22:54:36.649761  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 22:54:36.679021  160696 cri.go:87] found id: ""
	I0701 22:54:36.679049  160696 logs.go:274] 0 containers: []
	W0701 22:54:36.679058  160696 logs.go:276] No container was found matching "kube-scheduler"
	I0701 22:54:36.679066  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 22:54:36.679120  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 22:54:36.705720  160696 cri.go:87] found id: ""
	I0701 22:54:36.705750  160696 logs.go:274] 0 containers: []
	W0701 22:54:36.705758  160696 logs.go:276] No container was found matching "kube-proxy"
	I0701 22:54:36.705770  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 22:54:36.705811  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 22:54:36.732054  160696 cri.go:87] found id: ""
	I0701 22:54:36.732083  160696 logs.go:274] 0 containers: []
	W0701 22:54:36.732093  160696 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0701 22:54:36.732103  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 22:54:36.732165  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 22:54:36.761779  160696 cri.go:87] found id: ""
	I0701 22:54:36.761806  160696 logs.go:274] 0 containers: []
	W0701 22:54:36.761815  160696 logs.go:276] No container was found matching "storage-provisioner"
	I0701 22:54:36.761825  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 22:54:36.761876  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 22:54:36.786589  160696 cri.go:87] found id: ""
	I0701 22:54:36.786611  160696 logs.go:274] 0 containers: []
	W0701 22:54:36.786617  160696 logs.go:276] No container was found matching "kube-controller-manager"
	I0701 22:54:36.786626  160696 logs.go:123] Gathering logs for container status ...
	I0701 22:54:36.786639  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 22:54:36.812309  160696 logs.go:123] Gathering logs for kubelet ...
	I0701 22:54:36.812341  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 22:54:36.877299  160696 logs.go:138] Found kubelet problem: Jul 01 22:54:36 kubernetes-upgrade-20220701225105-10066 kubelet[4081]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:54:36.928347  160696 logs.go:123] Gathering logs for dmesg ...
	I0701 22:54:36.928394  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 22:54:36.948193  160696 logs.go:123] Gathering logs for describe nodes ...
	I0701 22:54:36.948231  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0701 22:54:37.025127  160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0701 22:54:37.025156  160696 logs.go:123] Gathering logs for containerd ...
	I0701 22:54:37.025172  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 22:54:37.075243  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:54:37.075275  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0701 22:54:37.075415  160696 out.go:239] X Problems detected in kubelet:
	W0701 22:54:37.075433  160696 out.go:239]   Jul 01 22:54:36 kubernetes-upgrade-20220701225105-10066 kubelet[4081]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:54:37.075449  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:54:37.075463  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
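The dmesg invocation used above, spelled out with long options (util-linux dmesg is assumed, as in the minikube node image): -H renders human-readable output, -P suppresses the pager that -H would otherwise start, -L=never disables color, and --level restricts the severities:

  # Long-option form of the kernel-log gather
  sudo dmesg --human --nopager --color=never --level warn,err,crit,alert,emerg | tail -n 400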
	I0701 22:54:47.076890  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:54:47.562452  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 22:54:47.562512  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 22:54:47.595437  160696 cri.go:87] found id: ""
	I0701 22:54:47.595465  160696 logs.go:274] 0 containers: []
	W0701 22:54:47.595475  160696 logs.go:276] No container was found matching "kube-apiserver"
	I0701 22:54:47.595482  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 22:54:47.595538  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 22:54:47.621071  160696 cri.go:87] found id: ""
	I0701 22:54:47.621094  160696 logs.go:274] 0 containers: []
	W0701 22:54:47.621102  160696 logs.go:276] No container was found matching "etcd"
	I0701 22:54:47.621109  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 22:54:47.621152  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 22:54:47.648248  160696 cri.go:87] found id: ""
	I0701 22:54:47.648269  160696 logs.go:274] 0 containers: []
	W0701 22:54:47.648274  160696 logs.go:276] No container was found matching "coredns"
	I0701 22:54:47.648280  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 22:54:47.648329  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 22:54:47.676799  160696 cri.go:87] found id: ""
	I0701 22:54:47.676828  160696 logs.go:274] 0 containers: []
	W0701 22:54:47.676836  160696 logs.go:276] No container was found matching "kube-scheduler"
	I0701 22:54:47.676844  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 22:54:47.676896  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 22:54:47.700394  160696 cri.go:87] found id: ""
	I0701 22:54:47.700418  160696 logs.go:274] 0 containers: []
	W0701 22:54:47.700426  160696 logs.go:276] No container was found matching "kube-proxy"
	I0701 22:54:47.700434  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 22:54:47.700486  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 22:54:47.739622  160696 cri.go:87] found id: ""
	I0701 22:54:47.739654  160696 logs.go:274] 0 containers: []
	W0701 22:54:47.739662  160696 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0701 22:54:47.739671  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 22:54:47.739724  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 22:54:47.763796  160696 cri.go:87] found id: ""
	I0701 22:54:47.763820  160696 logs.go:274] 0 containers: []
	W0701 22:54:47.763826  160696 logs.go:276] No container was found matching "storage-provisioner"
	I0701 22:54:47.763833  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 22:54:47.763889  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 22:54:47.786674  160696 cri.go:87] found id: ""
	I0701 22:54:47.786717  160696 logs.go:274] 0 containers: []
	W0701 22:54:47.786726  160696 logs.go:276] No container was found matching "kube-controller-manager"
	I0701 22:54:47.786736  160696 logs.go:123] Gathering logs for kubelet ...
	I0701 22:54:47.786746  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 22:54:47.838033  160696 logs.go:138] Found kubelet problem: Jul 01 22:54:47 kubernetes-upgrade-20220701225105-10066 kubelet[4428]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:54:47.887699  160696 logs.go:123] Gathering logs for dmesg ...
	I0701 22:54:47.887726  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 22:54:47.902663  160696 logs.go:123] Gathering logs for describe nodes ...
	I0701 22:54:47.902698  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0701 22:54:47.958610  160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0701 22:54:47.958638  160696 logs.go:123] Gathering logs for containerd ...
	I0701 22:54:47.958651  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 22:54:48.011007  160696 logs.go:123] Gathering logs for container status ...
	I0701 22:54:48.011038  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 22:54:48.037572  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:54:48.037607  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0701 22:54:48.037738  160696 out.go:239] X Problems detected in kubelet:
	W0701 22:54:48.037765  160696 out.go:239]   Jul 01 22:54:47 kubernetes-upgrade-20220701225105-10066 kubelet[4428]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:54:48.037784  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:54:48.037793  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
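For what it's worth -- and this is a hypothetical manual unblock, not something the test attempts -- the crash loop would end once the removed flag is deleted from the kubelet arguments and the unit restarted. The path here is the usual kubeadm one and may not match a minikube node exactly:

  # Strip the flag kubelet v1.24 no longer accepts, then restart the unit
  sudo sed -i 's/ *--cni-conf-dir=[^" ]*//' /var/lib/kubelet/kubeadm-flags.env
  sudo systemctl restart kubelet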
	I0701 22:54:58.038971  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:54:58.062879  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 22:54:58.062967  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 22:54:58.088606  160696 cri.go:87] found id: ""
	I0701 22:54:58.088638  160696 logs.go:274] 0 containers: []
	W0701 22:54:58.088647  160696 logs.go:276] No container was found matching "kube-apiserver"
	I0701 22:54:58.088654  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 22:54:58.088709  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 22:54:58.111118  160696 cri.go:87] found id: ""
	I0701 22:54:58.111148  160696 logs.go:274] 0 containers: []
	W0701 22:54:58.111158  160696 logs.go:276] No container was found matching "etcd"
	I0701 22:54:58.111167  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 22:54:58.111221  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 22:54:58.133450  160696 cri.go:87] found id: ""
	I0701 22:54:58.133472  160696 logs.go:274] 0 containers: []
	W0701 22:54:58.133478  160696 logs.go:276] No container was found matching "coredns"
	I0701 22:54:58.133491  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 22:54:58.133545  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 22:54:58.155591  160696 cri.go:87] found id: ""
	I0701 22:54:58.155612  160696 logs.go:274] 0 containers: []
	W0701 22:54:58.155618  160696 logs.go:276] No container was found matching "kube-scheduler"
	I0701 22:54:58.155625  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 22:54:58.155669  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 22:54:58.178502  160696 cri.go:87] found id: ""
	I0701 22:54:58.178531  160696 logs.go:274] 0 containers: []
	W0701 22:54:58.178559  160696 logs.go:276] No container was found matching "kube-proxy"
	I0701 22:54:58.178568  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 22:54:58.178617  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 22:54:58.202856  160696 cri.go:87] found id: ""
	I0701 22:54:58.202886  160696 logs.go:274] 0 containers: []
	W0701 22:54:58.202894  160696 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0701 22:54:58.202902  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 22:54:58.202956  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 22:54:58.232058  160696 cri.go:87] found id: ""
	I0701 22:54:58.232084  160696 logs.go:274] 0 containers: []
	W0701 22:54:58.232091  160696 logs.go:276] No container was found matching "storage-provisioner"
	I0701 22:54:58.232097  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 22:54:58.232145  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 22:54:58.254778  160696 cri.go:87] found id: ""
	I0701 22:54:58.254810  160696 logs.go:274] 0 containers: []
	W0701 22:54:58.254819  160696 logs.go:276] No container was found matching "kube-controller-manager"
	I0701 22:54:58.254830  160696 logs.go:123] Gathering logs for containerd ...
	I0701 22:54:58.254843  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 22:54:58.289770  160696 logs.go:123] Gathering logs for container status ...
	I0701 22:54:58.289798  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 22:54:58.315832  160696 logs.go:123] Gathering logs for kubelet ...
	I0701 22:54:58.315859  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 22:54:58.366123  160696 logs.go:138] Found kubelet problem: Jul 01 22:54:58 kubernetes-upgrade-20220701225105-10066 kubelet[4741]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:54:58.410935  160696 logs.go:123] Gathering logs for dmesg ...
	I0701 22:54:58.410962  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 22:54:58.424934  160696 logs.go:123] Gathering logs for describe nodes ...
	I0701 22:54:58.424960  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0701 22:54:58.473003  160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0701 22:54:58.473031  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:54:58.473041  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0701 22:54:58.473141  160696 out.go:239] X Problems detected in kubelet:
	W0701 22:54:58.473156  160696 out.go:239]   Jul 01 22:54:58 kubernetes-upgrade-20220701225105-10066 kubelet[4741]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:54:58.473162  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:54:58.473173  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:55:08.474941  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:55:08.562590  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 22:55:08.562672  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 22:55:08.586407  160696 cri.go:87] found id: ""
	I0701 22:55:08.586436  160696 logs.go:274] 0 containers: []
	W0701 22:55:08.586444  160696 logs.go:276] No container was found matching "kube-apiserver"
	I0701 22:55:08.586452  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 22:55:08.586505  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 22:55:08.608738  160696 cri.go:87] found id: ""
	I0701 22:55:08.608766  160696 logs.go:274] 0 containers: []
	W0701 22:55:08.608774  160696 logs.go:276] No container was found matching "etcd"
	I0701 22:55:08.608782  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 22:55:08.608821  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 22:55:08.633416  160696 cri.go:87] found id: ""
	I0701 22:55:08.633436  160696 logs.go:274] 0 containers: []
	W0701 22:55:08.633442  160696 logs.go:276] No container was found matching "coredns"
	I0701 22:55:08.633448  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 22:55:08.633489  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 22:55:08.655497  160696 cri.go:87] found id: ""
	I0701 22:55:08.655516  160696 logs.go:274] 0 containers: []
	W0701 22:55:08.655522  160696 logs.go:276] No container was found matching "kube-scheduler"
	I0701 22:55:08.655527  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 22:55:08.655568  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 22:55:08.679218  160696 cri.go:87] found id: ""
	I0701 22:55:08.679240  160696 logs.go:274] 0 containers: []
	W0701 22:55:08.679249  160696 logs.go:276] No container was found matching "kube-proxy"
	I0701 22:55:08.679256  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 22:55:08.679304  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 22:55:08.704513  160696 cri.go:87] found id: ""
	I0701 22:55:08.704535  160696 logs.go:274] 0 containers: []
	W0701 22:55:08.704543  160696 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0701 22:55:08.704551  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 22:55:08.704598  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 22:55:08.729573  160696 cri.go:87] found id: ""
	I0701 22:55:08.729604  160696 logs.go:274] 0 containers: []
	W0701 22:55:08.729612  160696 logs.go:276] No container was found matching "storage-provisioner"
	I0701 22:55:08.729619  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 22:55:08.729723  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 22:55:08.753043  160696 cri.go:87] found id: ""
	I0701 22:55:08.753068  160696 logs.go:274] 0 containers: []
	W0701 22:55:08.753074  160696 logs.go:276] No container was found matching "kube-controller-manager"
	I0701 22:55:08.753082  160696 logs.go:123] Gathering logs for kubelet ...
	I0701 22:55:08.753091  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 22:55:08.796915  160696 logs.go:138] Found kubelet problem: Jul 01 22:55:08 kubernetes-upgrade-20220701225105-10066 kubelet[5036]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:55:08.842381  160696 logs.go:123] Gathering logs for dmesg ...
	I0701 22:55:08.842408  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 22:55:08.857534  160696 logs.go:123] Gathering logs for describe nodes ...
	I0701 22:55:08.857567  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0701 22:55:08.906371  160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0701 22:55:08.906394  160696 logs.go:123] Gathering logs for containerd ...
	I0701 22:55:08.906406  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 22:55:08.942796  160696 logs.go:123] Gathering logs for container status ...
	I0701 22:55:08.942824  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 22:55:08.968098  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:55:08.968125  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0701 22:55:08.968222  160696 out.go:239] X Problems detected in kubelet:
	W0701 22:55:08.968235  160696 out.go:239]   Jul 01 22:55:08 kubernetes-upgrade-20220701225105-10066 kubelet[5036]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:55:08.968239  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:55:08.968245  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:55:18.968798  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:55:19.063356  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 22:55:19.063428  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 22:55:19.093011  160696 cri.go:87] found id: ""
	I0701 22:55:19.093035  160696 logs.go:274] 0 containers: []
	W0701 22:55:19.093041  160696 logs.go:276] No container was found matching "kube-apiserver"
	I0701 22:55:19.093047  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 22:55:19.093090  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 22:55:19.119221  160696 cri.go:87] found id: ""
	I0701 22:55:19.119297  160696 logs.go:274] 0 containers: []
	W0701 22:55:19.119318  160696 logs.go:276] No container was found matching "etcd"
	I0701 22:55:19.119327  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 22:55:19.119383  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 22:55:19.143965  160696 cri.go:87] found id: ""
	I0701 22:55:19.143987  160696 logs.go:274] 0 containers: []
	W0701 22:55:19.143994  160696 logs.go:276] No container was found matching "coredns"
	I0701 22:55:19.144001  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 22:55:19.144051  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 22:55:19.166639  160696 cri.go:87] found id: ""
	I0701 22:55:19.166668  160696 logs.go:274] 0 containers: []
	W0701 22:55:19.166688  160696 logs.go:276] No container was found matching "kube-scheduler"
	I0701 22:55:19.166697  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 22:55:19.166754  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 22:55:19.192145  160696 cri.go:87] found id: ""
	I0701 22:55:19.192171  160696 logs.go:274] 0 containers: []
	W0701 22:55:19.192179  160696 logs.go:276] No container was found matching "kube-proxy"
	I0701 22:55:19.192192  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 22:55:19.192254  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 22:55:19.218856  160696 cri.go:87] found id: ""
	I0701 22:55:19.218882  160696 logs.go:274] 0 containers: []
	W0701 22:55:19.218891  160696 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0701 22:55:19.218898  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 22:55:19.218948  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 22:55:19.243276  160696 cri.go:87] found id: ""
	I0701 22:55:19.243296  160696 logs.go:274] 0 containers: []
	W0701 22:55:19.243302  160696 logs.go:276] No container was found matching "storage-provisioner"
	I0701 22:55:19.243308  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 22:55:19.243353  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 22:55:19.266676  160696 cri.go:87] found id: ""
	I0701 22:55:19.266704  160696 logs.go:274] 0 containers: []
	W0701 22:55:19.266713  160696 logs.go:276] No container was found matching "kube-controller-manager"
	I0701 22:55:19.266724  160696 logs.go:123] Gathering logs for kubelet ...
	I0701 22:55:19.266737  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 22:55:19.318352  160696 logs.go:138] Found kubelet problem: Jul 01 22:55:19 kubernetes-upgrade-20220701225105-10066 kubelet[5330]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:55:19.363471  160696 logs.go:123] Gathering logs for dmesg ...
	I0701 22:55:19.363497  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 22:55:19.377394  160696 logs.go:123] Gathering logs for describe nodes ...
	I0701 22:55:19.377418  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0701 22:55:19.428638  160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0701 22:55:19.428661  160696 logs.go:123] Gathering logs for containerd ...
	I0701 22:55:19.428676  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 22:55:19.465329  160696 logs.go:123] Gathering logs for container status ...
	I0701 22:55:19.465361  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 22:55:19.490929  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:55:19.490952  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0701 22:55:19.491049  160696 out.go:239] X Problems detected in kubelet:
	W0701 22:55:19.491061  160696 out.go:239]   Jul 01 22:55:19 kubernetes-upgrade-20220701225105-10066 kubelet[5330]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:55:19.491068  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:55:19.491073  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
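Note the kubelet PID in the problem line climbing cycle over cycle (2626, 2915, 3201, 3487, 3842, 4081, 4428, 4741, 5036, 5330, ...): systemd keeps restarting the unit into the same flag-parse failure. The unit's restart counter tells the same story (the NRestarts property needs systemd 235 or newer):

  # Count how many times systemd has restarted the crashing kubelet
  systemctl show kubelet -p NRestarts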
	I0701 22:55:29.491548  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:55:29.562433  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 22:55:29.562499  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 22:55:29.586547  160696 cri.go:87] found id: ""
	I0701 22:55:29.586571  160696 logs.go:274] 0 containers: []
	W0701 22:55:29.586580  160696 logs.go:276] No container was found matching "kube-apiserver"
	I0701 22:55:29.586587  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 22:55:29.586636  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 22:55:29.611066  160696 cri.go:87] found id: ""
	I0701 22:55:29.611100  160696 logs.go:274] 0 containers: []
	W0701 22:55:29.611108  160696 logs.go:276] No container was found matching "etcd"
	I0701 22:55:29.611116  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 22:55:29.611169  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 22:55:29.636857  160696 cri.go:87] found id: ""
	I0701 22:55:29.636885  160696 logs.go:274] 0 containers: []
	W0701 22:55:29.636894  160696 logs.go:276] No container was found matching "coredns"
	I0701 22:55:29.636902  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 22:55:29.636951  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 22:55:29.676798  160696 cri.go:87] found id: ""
	I0701 22:55:29.676827  160696 logs.go:274] 0 containers: []
	W0701 22:55:29.676835  160696 logs.go:276] No container was found matching "kube-scheduler"
	I0701 22:55:29.676843  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 22:55:29.676895  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 22:55:29.708940  160696 cri.go:87] found id: ""
	I0701 22:55:29.708971  160696 logs.go:274] 0 containers: []
	W0701 22:55:29.708980  160696 logs.go:276] No container was found matching "kube-proxy"
	I0701 22:55:29.708986  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 22:55:29.709036  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 22:55:29.740654  160696 cri.go:87] found id: ""
	I0701 22:55:29.740680  160696 logs.go:274] 0 containers: []
	W0701 22:55:29.740689  160696 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0701 22:55:29.740697  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 22:55:29.740747  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 22:55:29.769352  160696 cri.go:87] found id: ""
	I0701 22:55:29.769380  160696 logs.go:274] 0 containers: []
	W0701 22:55:29.769390  160696 logs.go:276] No container was found matching "storage-provisioner"
	I0701 22:55:29.769397  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 22:55:29.769446  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 22:55:29.793398  160696 cri.go:87] found id: ""
	I0701 22:55:29.793423  160696 logs.go:274] 0 containers: []
	W0701 22:55:29.793432  160696 logs.go:276] No container was found matching "kube-controller-manager"
	I0701 22:55:29.793443  160696 logs.go:123] Gathering logs for kubelet ...
	I0701 22:55:29.793462  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 22:55:29.843283  160696 logs.go:138] Found kubelet problem: Jul 01 22:55:29 kubernetes-upgrade-20220701225105-10066 kubelet[5615]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:55:29.891720  160696 logs.go:123] Gathering logs for dmesg ...
	I0701 22:55:29.891748  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 22:55:29.906601  160696 logs.go:123] Gathering logs for describe nodes ...
	I0701 22:55:29.906635  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0701 22:55:29.961916  160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0701 22:55:29.961945  160696 logs.go:123] Gathering logs for containerd ...
	I0701 22:55:29.961959  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 22:55:30.015559  160696 logs.go:123] Gathering logs for container status ...
	I0701 22:55:30.015594  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 22:55:30.046203  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:55:30.046234  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0701 22:55:30.046364  160696 out.go:239] X Problems detected in kubelet:
	W0701 22:55:30.046384  160696 out.go:239]   Jul 01 22:55:29 kubernetes-upgrade-20220701225105-10066 kubelet[5615]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:55:30.046391  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:55:30.046401  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:55:40.046837  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:55:40.062927  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 22:55:40.062993  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 22:55:40.086083  160696 cri.go:87] found id: ""
	I0701 22:55:40.086105  160696 logs.go:274] 0 containers: []
	W0701 22:55:40.086112  160696 logs.go:276] No container was found matching "kube-apiserver"
	I0701 22:55:40.086117  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 22:55:40.086164  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 22:55:40.111953  160696 cri.go:87] found id: ""
	I0701 22:55:40.111976  160696 logs.go:274] 0 containers: []
	W0701 22:55:40.111982  160696 logs.go:276] No container was found matching "etcd"
	I0701 22:55:40.111988  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 22:55:40.112031  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 22:55:40.135730  160696 cri.go:87] found id: ""
	I0701 22:55:40.135752  160696 logs.go:274] 0 containers: []
	W0701 22:55:40.135758  160696 logs.go:276] No container was found matching "coredns"
	I0701 22:55:40.135766  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 22:55:40.135818  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 22:55:40.159395  160696 cri.go:87] found id: ""
	I0701 22:55:40.159420  160696 logs.go:274] 0 containers: []
	W0701 22:55:40.159426  160696 logs.go:276] No container was found matching "kube-scheduler"
	I0701 22:55:40.159432  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 22:55:40.159484  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 22:55:40.183672  160696 cri.go:87] found id: ""
	I0701 22:55:40.183698  160696 logs.go:274] 0 containers: []
	W0701 22:55:40.183707  160696 logs.go:276] No container was found matching "kube-proxy"
	I0701 22:55:40.183714  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 22:55:40.183763  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 22:55:40.212650  160696 cri.go:87] found id: ""
	I0701 22:55:40.212677  160696 logs.go:274] 0 containers: []
	W0701 22:55:40.212684  160696 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0701 22:55:40.212691  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 22:55:40.212741  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 22:55:40.240724  160696 cri.go:87] found id: ""
	I0701 22:55:40.240750  160696 logs.go:274] 0 containers: []
	W0701 22:55:40.240757  160696 logs.go:276] No container was found matching "storage-provisioner"
	I0701 22:55:40.240765  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 22:55:40.240817  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 22:55:40.263440  160696 cri.go:87] found id: ""
	I0701 22:55:40.263465  160696 logs.go:274] 0 containers: []
	W0701 22:55:40.263473  160696 logs.go:276] No container was found matching "kube-controller-manager"
	I0701 22:55:40.263483  160696 logs.go:123] Gathering logs for kubelet ...
	I0701 22:55:40.263495  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 22:55:40.307528  160696 logs.go:138] Found kubelet problem: Jul 01 22:55:40 kubernetes-upgrade-20220701225105-10066 kubelet[5915]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:55:40.352597  160696 logs.go:123] Gathering logs for dmesg ...
	I0701 22:55:40.352628  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 22:55:40.366682  160696 logs.go:123] Gathering logs for describe nodes ...
	I0701 22:55:40.366708  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0701 22:55:40.415340  160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0701 22:55:40.415366  160696 logs.go:123] Gathering logs for containerd ...
	I0701 22:55:40.415379  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 22:55:40.453970  160696 logs.go:123] Gathering logs for container status ...
	I0701 22:55:40.454007  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 22:55:40.482132  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:55:40.482214  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0701 22:55:40.482359  160696 out.go:239] X Problems detected in kubelet:
	W0701 22:55:40.482374  160696 out.go:239]   Jul 01 22:55:40 kubernetes-upgrade-20220701225105-10066 kubelet[5915]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:55:40.482379  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:55:40.482385  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:55:50.483245  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:55:50.562586  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 22:55:50.562662  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 22:55:50.592410  160696 cri.go:87] found id: ""
	I0701 22:55:50.592432  160696 logs.go:274] 0 containers: []
	W0701 22:55:50.592441  160696 logs.go:276] No container was found matching "kube-apiserver"
	I0701 22:55:50.592448  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 22:55:50.592498  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 22:55:50.615057  160696 cri.go:87] found id: ""
	I0701 22:55:50.615081  160696 logs.go:274] 0 containers: []
	W0701 22:55:50.615090  160696 logs.go:276] No container was found matching "etcd"
	I0701 22:55:50.615098  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 22:55:50.615146  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 22:55:50.648585  160696 cri.go:87] found id: ""
	I0701 22:55:50.648613  160696 logs.go:274] 0 containers: []
	W0701 22:55:50.648621  160696 logs.go:276] No container was found matching "coredns"
	I0701 22:55:50.648630  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 22:55:50.648679  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 22:55:50.678338  160696 cri.go:87] found id: ""
	I0701 22:55:50.678365  160696 logs.go:274] 0 containers: []
	W0701 22:55:50.678374  160696 logs.go:276] No container was found matching "kube-scheduler"
	I0701 22:55:50.678381  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 22:55:50.678456  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 22:55:50.704520  160696 cri.go:87] found id: ""
	I0701 22:55:50.704546  160696 logs.go:274] 0 containers: []
	W0701 22:55:50.704555  160696 logs.go:276] No container was found matching "kube-proxy"
	I0701 22:55:50.704562  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 22:55:50.704616  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 22:55:50.748816  160696 cri.go:87] found id: ""
	I0701 22:55:50.748838  160696 logs.go:274] 0 containers: []
	W0701 22:55:50.748846  160696 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0701 22:55:50.748853  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 22:55:50.748902  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 22:55:50.778493  160696 cri.go:87] found id: ""
	I0701 22:55:50.778522  160696 logs.go:274] 0 containers: []
	W0701 22:55:50.778530  160696 logs.go:276] No container was found matching "storage-provisioner"
	I0701 22:55:50.778570  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 22:55:50.778627  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 22:55:50.807443  160696 cri.go:87] found id: ""
	I0701 22:55:50.807468  160696 logs.go:274] 0 containers: []
	W0701 22:55:50.807474  160696 logs.go:276] No container was found matching "kube-controller-manager"
	I0701 22:55:50.807482  160696 logs.go:123] Gathering logs for kubelet ...
	I0701 22:55:50.807495  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 22:55:50.878177  160696 logs.go:138] Found kubelet problem: Jul 01 22:55:50 kubernetes-upgrade-20220701225105-10066 kubelet[6197]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:55:50.941462  160696 logs.go:123] Gathering logs for dmesg ...
	I0701 22:55:50.941496  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 22:55:50.960430  160696 logs.go:123] Gathering logs for describe nodes ...
	I0701 22:55:50.960484  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0701 22:55:51.028941  160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0701 22:55:51.028968  160696 logs.go:123] Gathering logs for containerd ...
	I0701 22:55:51.028981  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 22:55:51.088918  160696 logs.go:123] Gathering logs for container status ...
	I0701 22:55:51.088957  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 22:55:51.129064  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:55:51.129090  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0701 22:55:51.129192  160696 out.go:239] X Problems detected in kubelet:
	W0701 22:55:51.129206  160696 out.go:239]   Jul 01 22:55:50 kubernetes-upgrade-20220701225105-10066 kubelet[6197]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:55:51.129213  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:55:51.129220  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:56:01.129928  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:56:01.563135  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 22:56:01.563218  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 22:56:01.592695  160696 cri.go:87] found id: ""
	I0701 22:56:01.592722  160696 logs.go:274] 0 containers: []
	W0701 22:56:01.592731  160696 logs.go:276] No container was found matching "kube-apiserver"
	I0701 22:56:01.592738  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 22:56:01.592793  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 22:56:01.619252  160696 cri.go:87] found id: ""
	I0701 22:56:01.619281  160696 logs.go:274] 0 containers: []
	W0701 22:56:01.619292  160696 logs.go:276] No container was found matching "etcd"
	I0701 22:56:01.619300  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 22:56:01.619352  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 22:56:01.652542  160696 cri.go:87] found id: ""
	I0701 22:56:01.652571  160696 logs.go:274] 0 containers: []
	W0701 22:56:01.652581  160696 logs.go:276] No container was found matching "coredns"
	I0701 22:56:01.652589  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 22:56:01.652648  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 22:56:01.684585  160696 cri.go:87] found id: ""
	I0701 22:56:01.684614  160696 logs.go:274] 0 containers: []
	W0701 22:56:01.684622  160696 logs.go:276] No container was found matching "kube-scheduler"
	I0701 22:56:01.684630  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 22:56:01.684690  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 22:56:01.715317  160696 cri.go:87] found id: ""
	I0701 22:56:01.715342  160696 logs.go:274] 0 containers: []
	W0701 22:56:01.715349  160696 logs.go:276] No container was found matching "kube-proxy"
	I0701 22:56:01.715357  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 22:56:01.715403  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 22:56:01.743629  160696 cri.go:87] found id: ""
	I0701 22:56:01.743650  160696 logs.go:274] 0 containers: []
	W0701 22:56:01.743658  160696 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0701 22:56:01.743668  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 22:56:01.743716  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 22:56:01.781823  160696 cri.go:87] found id: ""
	I0701 22:56:01.781846  160696 logs.go:274] 0 containers: []
	W0701 22:56:01.781853  160696 logs.go:276] No container was found matching "storage-provisioner"
	I0701 22:56:01.781860  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 22:56:01.781913  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 22:56:01.811885  160696 cri.go:87] found id: ""
	I0701 22:56:01.811918  160696 logs.go:274] 0 containers: []
	W0701 22:56:01.811928  160696 logs.go:276] No container was found matching "kube-controller-manager"
	I0701 22:56:01.811941  160696 logs.go:123] Gathering logs for dmesg ...
	I0701 22:56:01.811957  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 22:56:01.828794  160696 logs.go:123] Gathering logs for describe nodes ...
	I0701 22:56:01.828824  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0701 22:56:01.891766  160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0701 22:56:01.891794  160696 logs.go:123] Gathering logs for containerd ...
	I0701 22:56:01.891809  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 22:56:01.950065  160696 logs.go:123] Gathering logs for container status ...
	I0701 22:56:01.950100  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 22:56:01.984620  160696 logs.go:123] Gathering logs for kubelet ...
	I0701 22:56:01.984655  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 22:56:02.048126  160696 logs.go:138] Found kubelet problem: Jul 01 22:56:01 kubernetes-upgrade-20220701225105-10066 kubelet[6433]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:56:02.104666  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:56:02.104696  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0701 22:56:02.104822  160696 out.go:239] X Problems detected in kubelet:
	W0701 22:56:02.104836  160696 out.go:239]   Jul 01 22:56:01 kubernetes-upgrade-20220701225105-10066 kubelet[6433]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:56:02.104840  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:56:02.104845  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:56:12.106368  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:56:12.562626  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 22:56:12.562693  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 22:56:12.587551  160696 cri.go:87] found id: ""
	I0701 22:56:12.587583  160696 logs.go:274] 0 containers: []
	W0701 22:56:12.587592  160696 logs.go:276] No container was found matching "kube-apiserver"
	I0701 22:56:12.587599  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 22:56:12.587655  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 22:56:12.614992  160696 cri.go:87] found id: ""
	I0701 22:56:12.615023  160696 logs.go:274] 0 containers: []
	W0701 22:56:12.615033  160696 logs.go:276] No container was found matching "etcd"
	I0701 22:56:12.615041  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 22:56:12.615091  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 22:56:12.641795  160696 cri.go:87] found id: ""
	I0701 22:56:12.641828  160696 logs.go:274] 0 containers: []
	W0701 22:56:12.641840  160696 logs.go:276] No container was found matching "coredns"
	I0701 22:56:12.641849  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 22:56:12.641904  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 22:56:12.671467  160696 cri.go:87] found id: ""
	I0701 22:56:12.671490  160696 logs.go:274] 0 containers: []
	W0701 22:56:12.671496  160696 logs.go:276] No container was found matching "kube-scheduler"
	I0701 22:56:12.671501  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 22:56:12.671539  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 22:56:12.695033  160696 cri.go:87] found id: ""
	I0701 22:56:12.695061  160696 logs.go:274] 0 containers: []
	W0701 22:56:12.695069  160696 logs.go:276] No container was found matching "kube-proxy"
	I0701 22:56:12.695076  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 22:56:12.695127  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 22:56:12.724522  160696 cri.go:87] found id: ""
	I0701 22:56:12.724550  160696 logs.go:274] 0 containers: []
	W0701 22:56:12.724559  160696 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0701 22:56:12.724566  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 22:56:12.724620  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 22:56:12.748385  160696 cri.go:87] found id: ""
	I0701 22:56:12.748409  160696 logs.go:274] 0 containers: []
	W0701 22:56:12.748417  160696 logs.go:276] No container was found matching "storage-provisioner"
	I0701 22:56:12.748425  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 22:56:12.748477  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 22:56:12.770615  160696 cri.go:87] found id: ""
	I0701 22:56:12.770637  160696 logs.go:274] 0 containers: []
	W0701 22:56:12.770643  160696 logs.go:276] No container was found matching "kube-controller-manager"
	I0701 22:56:12.770652  160696 logs.go:123] Gathering logs for kubelet ...
	I0701 22:56:12.770665  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 22:56:12.817319  160696 logs.go:138] Found kubelet problem: Jul 01 22:56:12 kubernetes-upgrade-20220701225105-10066 kubelet[6722]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:56:12.882185  160696 logs.go:123] Gathering logs for dmesg ...
	I0701 22:56:12.882217  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 22:56:12.896191  160696 logs.go:123] Gathering logs for describe nodes ...
	I0701 22:56:12.896214  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0701 22:56:12.951475  160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0701 22:56:12.951499  160696 logs.go:123] Gathering logs for containerd ...
	I0701 22:56:12.951509  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 22:56:12.988895  160696 logs.go:123] Gathering logs for container status ...
	I0701 22:56:12.988927  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 22:56:13.014844  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:56:13.014873  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0701 22:56:13.014983  160696 out.go:239] X Problems detected in kubelet:
	W0701 22:56:13.014999  160696 out.go:239]   Jul 01 22:56:12 kubernetes-upgrade-20220701225105-10066 kubelet[6722]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:56:13.015006  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:56:13.015012  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:56:23.015738  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:56:23.063438  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 22:56:23.063513  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 22:56:23.087031  160696 cri.go:87] found id: ""
	I0701 22:56:23.087053  160696 logs.go:274] 0 containers: []
	W0701 22:56:23.087061  160696 logs.go:276] No container was found matching "kube-apiserver"
	I0701 22:56:23.087070  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 22:56:23.087113  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 22:56:23.108796  160696 cri.go:87] found id: ""
	I0701 22:56:23.108827  160696 logs.go:274] 0 containers: []
	W0701 22:56:23.108835  160696 logs.go:276] No container was found matching "etcd"
	I0701 22:56:23.108841  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 22:56:23.108881  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 22:56:23.136434  160696 cri.go:87] found id: ""
	I0701 22:56:23.136459  160696 logs.go:274] 0 containers: []
	W0701 22:56:23.136466  160696 logs.go:276] No container was found matching "coredns"
	I0701 22:56:23.136473  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 22:56:23.136521  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 22:56:23.164257  160696 cri.go:87] found id: ""
	I0701 22:56:23.164290  160696 logs.go:274] 0 containers: []
	W0701 22:56:23.164299  160696 logs.go:276] No container was found matching "kube-scheduler"
	I0701 22:56:23.164306  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 22:56:23.164349  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 22:56:23.188353  160696 cri.go:87] found id: ""
	I0701 22:56:23.188386  160696 logs.go:274] 0 containers: []
	W0701 22:56:23.188394  160696 logs.go:276] No container was found matching "kube-proxy"
	I0701 22:56:23.188400  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 22:56:23.188444  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 22:56:23.211193  160696 cri.go:87] found id: ""
	I0701 22:56:23.211218  160696 logs.go:274] 0 containers: []
	W0701 22:56:23.211226  160696 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0701 22:56:23.211232  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 22:56:23.211283  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 22:56:23.233393  160696 cri.go:87] found id: ""
	I0701 22:56:23.233417  160696 logs.go:274] 0 containers: []
	W0701 22:56:23.233426  160696 logs.go:276] No container was found matching "storage-provisioner"
	I0701 22:56:23.233432  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 22:56:23.233477  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 22:56:23.256986  160696 cri.go:87] found id: ""
	I0701 22:56:23.257016  160696 logs.go:274] 0 containers: []
	W0701 22:56:23.257024  160696 logs.go:276] No container was found matching "kube-controller-manager"
	I0701 22:56:23.257039  160696 logs.go:123] Gathering logs for kubelet ...
	I0701 22:56:23.257053  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 22:56:23.303988  160696 logs.go:138] Found kubelet problem: Jul 01 22:56:22 kubernetes-upgrade-20220701225105-10066 kubelet[7020]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:56:23.350427  160696 logs.go:123] Gathering logs for dmesg ...
	I0701 22:56:23.350464  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 22:56:23.366689  160696 logs.go:123] Gathering logs for describe nodes ...
	I0701 22:56:23.366759  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0701 22:56:23.414683  160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0701 22:56:23.414716  160696 logs.go:123] Gathering logs for containerd ...
	I0701 22:56:23.414732  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 22:56:23.459290  160696 logs.go:123] Gathering logs for container status ...
	I0701 22:56:23.459330  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 22:56:23.487564  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:56:23.487589  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0701 22:56:23.487699  160696 out.go:239] X Problems detected in kubelet:
	W0701 22:56:23.487714  160696 out.go:239]   Jul 01 22:56:22 kubernetes-upgrade-20220701225105-10066 kubelet[7020]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 22:56:23.487719  160696 out.go:309] Setting ErrFile to fd 2...
	I0701 22:56:23.487726  160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:56:33.489062  160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:56:33.497208  160696 kubeadm.go:630] restartCluster took 4m2.606075472s
	W0701 22:56:33.497319  160696 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
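
restartCluster spent just over four minutes polling for an apiserver process (the "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above) without ever finding one, so minikube gives up on the in-place restart and falls back to a clean bootstrap: the "kubeadm reset" below, then a fresh "kubeadm init". The same probe can be run by hand (a sketch assuming the default apiserver port 8443 seen in the connection-refused errors; /healthz may answer 401/403 rather than "ok" depending on anonymous-auth and RBAC settings):

    # is any apiserver process running in this node?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # is anything listening on the apiserver port at all?
    curl -k https://localhost:8443/healthz
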
	I0701 22:56:33.497343  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0701 22:56:34.188962  160696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 22:56:34.198405  160696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 22:56:34.205377  160696 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0701 22:56:34.205428  160696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 22:56:34.212306  160696 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 22:56:34.212346  160696 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0701 22:56:34.488887  160696 out.go:204]   - Generating certificates and keys ...
	I0701 22:56:35.222092  160696 out.go:204]   - Booting up control plane ...
	W0701 22:58:30.234697  160696 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1012-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0701 22:56:34.249651    7560 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1012-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
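
Note that the init did not fail preflight (those checks produced only warnings); it timed out in wait-control-plane. During that phase kubeadm polls the kubelet's own health endpoint on 127.0.0.1:10248, and "connection refused" there means the kubelet process is not running at all, consistent with the flag-parse crash above rather than an unhealthy-but-running kubelet. Two quick checks from inside the node (a sketch; "ss" comes from iproute2 and its presence in the minikube base image is an assumption):

    # the endpoint kubeadm polls; refused means the kubelet never came up
    curl -sSL http://localhost:10248/healthz
    # confirm nothing is bound to the kubelet health port
    sudo ss -ltnp | grep 10248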
	
	I0701 22:58:30.234763  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0701 22:58:31.215309  160696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 22:58:31.225392  160696 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0701 22:58:31.225450  160696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 22:58:31.233427  160696 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 22:58:31.233475  160696 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0701 22:58:31.507505  160696 out.go:204]   - Generating certificates and keys ...
	I0701 22:58:32.633727  160696 out.go:204]   - Booting up control plane ...
	I0701 23:00:27.646987  160696 kubeadm.go:397] StartCluster complete in 7m56.791378401s
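
The second kubeadm init (started 22:58:31 above) timed out the same way, and the 4m2s restart loop plus the two roughly two-minute init attempts account for nearly all of the 7m56s StartCluster time. From here the log only re-collects the same diagnostics before start returns exit status 109. When triaging such a run locally, the same evidence can be pulled with minikube's own commands (a sketch using this run's profile name and the test binary's path):

    out/minikube-linux-amd64 status -p kubernetes-upgrade-20220701225105-10066
    out/minikube-linux-amd64 logs -p kubernetes-upgrade-20220701225105-10066
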
	I0701 23:00:27.647038  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 23:00:27.647092  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 23:00:27.670385  160696 cri.go:87] found id: ""
	I0701 23:00:27.670408  160696 logs.go:274] 0 containers: []
	W0701 23:00:27.670416  160696 logs.go:276] No container was found matching "kube-apiserver"
	I0701 23:00:27.670424  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 23:00:27.670479  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 23:00:27.695513  160696 cri.go:87] found id: ""
	I0701 23:00:27.695537  160696 logs.go:274] 0 containers: []
	W0701 23:00:27.695546  160696 logs.go:276] No container was found matching "etcd"
	I0701 23:00:27.695555  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 23:00:27.695610  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 23:00:27.718045  160696 cri.go:87] found id: ""
	I0701 23:00:27.718072  160696 logs.go:274] 0 containers: []
	W0701 23:00:27.718081  160696 logs.go:276] No container was found matching "coredns"
	I0701 23:00:27.718088  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 23:00:27.718135  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 23:00:27.742214  160696 cri.go:87] found id: ""
	I0701 23:00:27.742241  160696 logs.go:274] 0 containers: []
	W0701 23:00:27.742249  160696 logs.go:276] No container was found matching "kube-scheduler"
	I0701 23:00:27.742257  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 23:00:27.742312  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 23:00:27.764994  160696 cri.go:87] found id: ""
	I0701 23:00:27.765033  160696 logs.go:274] 0 containers: []
	W0701 23:00:27.765040  160696 logs.go:276] No container was found matching "kube-proxy"
	I0701 23:00:27.765047  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 23:00:27.765095  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 23:00:27.787131  160696 cri.go:87] found id: ""
	I0701 23:00:27.787155  160696 logs.go:274] 0 containers: []
	W0701 23:00:27.787161  160696 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0701 23:00:27.787166  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 23:00:27.787206  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 23:00:27.809474  160696 cri.go:87] found id: ""
	I0701 23:00:27.809497  160696 logs.go:274] 0 containers: []
	W0701 23:00:27.809503  160696 logs.go:276] No container was found matching "storage-provisioner"
	I0701 23:00:27.809508  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 23:00:27.809552  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 23:00:27.832826  160696 cri.go:87] found id: ""
	I0701 23:00:27.832850  160696 logs.go:274] 0 containers: []
	W0701 23:00:27.832857  160696 logs.go:276] No container was found matching "kube-controller-manager"
	I0701 23:00:27.832867  160696 logs.go:123] Gathering logs for kubelet ...
	I0701 23:00:27.832877  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 23:00:27.883957  160696 logs.go:138] Found kubelet problem: Jul 01 23:00:27 kubernetes-upgrade-20220701225105-10066 kubelet[11544]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 23:00:27.939530  160696 logs.go:123] Gathering logs for dmesg ...
	I0701 23:00:27.939568  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 23:00:27.959413  160696 logs.go:123] Gathering logs for describe nodes ...
	I0701 23:00:27.959491  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0701 23:00:28.015733  160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0701 23:00:28.015759  160696 logs.go:123] Gathering logs for containerd ...
	I0701 23:00:28.015772  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 23:00:28.063280  160696 logs.go:123] Gathering logs for container status ...
	I0701 23:00:28.063306  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0701 23:00:28.089939  160696 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1012-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0701 22:58:31.270801    9663 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1012-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
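	# A minimal sketch of the triage kubeadm suggests just above, run inside the
	# minikube node for this profile (e.g. `minikube ssh -p kubernetes-upgrade-20220701225105-10066`);
	# only the `tail` filter is added here:
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 50
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause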
	W0701 23:00:28.089978  160696 out.go:239] * 
	W0701 23:00:28.090236  160696 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1012-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0701 22:58:31.270801    9663 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1012-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0701 23:00:28.090268  160696 out.go:239] * 
	W0701 23:00:28.091045  160696 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 23:00:28.093078  160696 out.go:177] X Problems detected in kubelet:
	I0701 23:00:28.095148  160696 out.go:177]   Jul 01 23:00:27 kubernetes-upgrade-20220701225105-10066 kubelet[11544]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 23:00:28.098679  160696 out.go:177] 
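	# The kubelet problem reported above is the actual root cause of this run:
	# Kubernetes v1.24 removed the dockershim-era kubelet flags, --cni-conf-dir
	# among them, so a kubelet command line carried over from the v1.16 profile
	# exits before the :10248 health endpoint ever comes up. A sketch to locate
	# the stale flag on the node (the flags file is the one kubeadm reports
	# writing above; the systemd drop-in directory is the standard path, assumed):
	sudo grep -r -- '--cni-conf-dir' /var/lib/kubelet/kubeadm-flags.env /etc/systemd/system/kubelet.service.d/ 2>/dev/null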
	W0701 23:00:28.100157  160696 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1012-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0701 22:58:31.270801    9663 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1012-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0701 23:00:28.100275  160696 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0701 23:00:28.100315  160696 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0701 23:00:28.102745  160696 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:252: failed to upgrade with newest k8s version. args: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220701225105-10066 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
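A sketch of replaying the suggested remediation from the failure output against this profile, reusing the failing command's own arguments (note the suggestion targets the generic cgroup-driver case behind K8S_KUBELET_NOT_RUNNING, not necessarily the stale --cni-conf-dir flag this particular kubelet log shows):

	out/minikube-linux-amd64 start -p kubernetes-upgrade-20220701225105-10066 \
	  --memory=2200 --kubernetes-version=v1.24.2 --driver=docker \
	  --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd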
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220701225105-10066 version --output=json
version_upgrade_test.go:255: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20220701225105-10066 version --output=json: exit status 1 (50.503185ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "24",
	    "gitVersion": "v1.24.2",
	    "gitCommit": "f66044f4361b9f1f96f0053dd46cb7dce5e990a8",
	    "gitTreeState": "clean",
	    "buildDate": "2022-06-15T14:22:29Z",
	    "goVersion": "go1.18.3",
	    "compiler": "gc",
	    "platform": "linux/amd64"
	  },
	  "kustomizeVersion": "v4.5.4"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
version_upgrade_test.go:257: error running kubectl: exit status 1
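kubectl still prints the client block above because the client version never touches the cluster; only the server half of the call failed, hence exit status 1 with partial JSON. A sketch separating the two checks (the endpoint is the one named in the stderr):

	kubectl --context kubernetes-upgrade-20220701225105-10066 version --client --output=json
	curl -k --max-time 5 https://192.168.76.2:8443/version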
panic.go:482: *** TestKubernetesUpgrade FAILED at 2022-07-01 23:00:28.257535812 +0000 UTC m=+2217.562473326
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220701225105-10066
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20220701225105-10066:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6bb642abc37bb087095bb19e950542a27f883db2c629a53cbae2b3f9ebfb5f60",
	        "Created": "2022-07-01T22:51:17.262095505Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 161168,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-01T22:52:01.941680271Z",
	            "FinishedAt": "2022-07-01T22:51:59.909816228Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/6bb642abc37bb087095bb19e950542a27f883db2c629a53cbae2b3f9ebfb5f60/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6bb642abc37bb087095bb19e950542a27f883db2c629a53cbae2b3f9ebfb5f60/hostname",
	        "HostsPath": "/var/lib/docker/containers/6bb642abc37bb087095bb19e950542a27f883db2c629a53cbae2b3f9ebfb5f60/hosts",
	        "LogPath": "/var/lib/docker/containers/6bb642abc37bb087095bb19e950542a27f883db2c629a53cbae2b3f9ebfb5f60/6bb642abc37bb087095bb19e950542a27f883db2c629a53cbae2b3f9ebfb5f60-json.log",
	        "Name": "/kubernetes-upgrade-20220701225105-10066",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-20220701225105-10066:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220701225105-10066",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/088a5282bd64f53bf175ddb9ebf1d64415dace5742a4b38574e1ef7ebf3fb1be-init/diff:/var/lib/docker/overlay2/edb6d17bdc5cabdb1fac75f829fa6b696b451285a56daf994d10472ac162d1fa/diff:/var/lib/docker/overlay2/12553db54f9401cb3cd4ea304c82950d708bec4ffbeafa05f361df03389b23f5/diff:/var/lib/docker/overlay2/7e3fcd2813dd5acb24030b1419b7a8a53ea2329287a62c3cd166eb5e47617a4e/diff:/var/lib/docker/overlay2/15f31da888d327ea20972c47badf360a35e5d2b17a3fae8f69cb0d0d1db2bfa0/diff:/var/lib/docker/overlay2/4d7fcdaa5483b16cc999e79056aae74f5814a7aca964696154ac6ec31eebe508/diff:/var/lib/docker/overlay2/9f3a7dbc3c70fba7c77f5d8e54e394ac0c37123defe036d76af7d048d08b974e/diff:/var/lib/docker/overlay2/4f0b0b0343fd3ac1a6d5f774e91228b75f1e33f810714d09d38ecd1e38cfe250/diff:/var/lib/docker/overlay2/bfd684b2b6e2cfe1f30901dbf02332074c65e296b386b083ea38c10128b42115/diff:/var/lib/docker/overlay2/d05e53b39ab1962ee7dd1a17c7c6df7e7db940db5a19b0d9312ccafaf3c67afc/diff:/var/lib/docker/overlay2/7a24a3
8f96e1aff9985bf714d962a37888eced9c4b8a3018e499080634306871/diff:/var/lib/docker/overlay2/b54ee85e20ecd71a593d453d879a26c40869049e30ba5b18b4c021c4cf667ec6/diff:/var/lib/docker/overlay2/cc9f249b45cce30a5a0432a644435d68e765dc097e3bfdbde41b7d725f258866/diff:/var/lib/docker/overlay2/dac381b24f77e5de76295d05ca2d4ae6a611d31efdf28050dc8acbc16f893037/diff:/var/lib/docker/overlay2/fe9cd2517ce90b2330101b987c76605b9d73f40b8c967e33975ad22bd3e427df/diff:/var/lib/docker/overlay2/34da7eb6e789c53ba8f86e0d904243d3314979a523044238ab8b7e9e95f685d4/diff:/var/lib/docker/overlay2/44c3ffe75f70e4d9d5d17064609a4a167961cdca81aab03b2a72dfbe05d09f41/diff:/var/lib/docker/overlay2/76feeb878d0278a48230287307ce3a01000291c1284d2222d114e37836ebc153/diff:/var/lib/docker/overlay2/3ab5eb284edb3402610bb5d4fbd0d8c3fc6b53fd39d2a66cc3a9bb1e313fe5ee/diff:/var/lib/docker/overlay2/7604bb52dbaf4e2c359f0b2b11aa345bbf33498b0346044f4c6ff3178a3e8df5/diff:/var/lib/docker/overlay2/0a19abc50663f9812511b849524a7625a16f183fa64aff5a0a9c1c7664cc2027/diff:/var/lib/d
ocker/overlay2/00d25c59a62da71857955ae29c7c803b77cb4c97b23913f166c62440a3b63456/diff:/var/lib/docker/overlay2/cd6c06e5c4d7bdfb5a78ffcf6c0cf331a6101b713bd238ad5c0ab119ad599bf4/diff:/var/lib/docker/overlay2/73ae644d477b17687dc53b7176c9e05aa10e0e0cc22b0d2adbd96493280d2696/diff:/var/lib/docker/overlay2/1490b9c59b00b3ee6078671a8023960891ee2bfc21bfe92c12a71c87ea04fff1/diff:/var/lib/docker/overlay2/42f40e76b67a2f74340f543b0beab44f98b10e0a0903d4006a173272c52a31d0/diff:/var/lib/docker/overlay2/648763e18e8325f20ff26dd88a1788a2d9e575a9ca24f44b2430918381add179/diff:/var/lib/docker/overlay2/955b13af9b6a054abdcd0dfdf71ce57d6da1b5718bee40c52fc7739b4ff16496/diff:/var/lib/docker/overlay2/1e55f6bfa306e6667808548601e508c0335b7021c9c6791ee5ed3df2514aff12/diff:/var/lib/docker/overlay2/ed1fd4b4489438557a54639b2ed769eb0371193cd0129b222edea8f79274f7e8/diff:/var/lib/docker/overlay2/c4058f514a3e669de2d6139c7bc94d4187679acabf288e44e25b5d50db5b14ac/diff:/var/lib/docker/overlay2/f55c5f04c4a64f585a9b90e7f23e5d5142014f06b3bf877739879029b76
b574f/diff:/var/lib/docker/overlay2/74b9141dda92027ee02e8b3abc4bb05ed13ce01d58f4c3d644e7235e411a863e/diff:/var/lib/docker/overlay2/53dd59b4b1dfa382f4b296dffd27b84e247851098ea1c719fa3b8ed3e158e73d/diff:/var/lib/docker/overlay2/01e3082bf93042f963fa4d0b92796bd5a1f3663e87f24a76a6fb5f3dabdb3d2f/diff:/var/lib/docker/overlay2/6b96d2f01ba7bf16240d81397be753c74fa1eb78889ce36d1a88224496f5ee64/diff:/var/lib/docker/overlay2/1c326eb1859e3f119094b72e595fe89b3be1895193b0f9124ec3aa7be78fa0ff/diff:/var/lib/docker/overlay2/9c31f48fe49160b4b3d824b787329ea1d50133fd86b0c7f2054bebed2bd136b4/diff:/var/lib/docker/overlay2/6b78f7313fcedc631cc839c84f80e065670f47ccc8e548578527bec6b0f2cce3/diff:/var/lib/docker/overlay2/6bf15a167fccdef650b6e6b0daa521189e2c744a9a41640bc0e963421f139437/diff:/var/lib/docker/overlay2/b01230a9beae37bded75410c1a3849626080918fdc36535a1f4092f9227a7ccf/diff:/var/lib/docker/overlay2/793ce1632cff927c8e2528b23a62c94ed141cb52d5df04225085072c61ca5bb7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/088a5282bd64f53bf175ddb9ebf1d64415dace5742a4b38574e1ef7ebf3fb1be/merged",
	                "UpperDir": "/var/lib/docker/overlay2/088a5282bd64f53bf175ddb9ebf1d64415dace5742a4b38574e1ef7ebf3fb1be/diff",
	                "WorkDir": "/var/lib/docker/overlay2/088a5282bd64f53bf175ddb9ebf1d64415dace5742a4b38574e1ef7ebf3fb1be/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220701225105-10066",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220701225105-10066/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220701225105-10066",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220701225105-10066",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220701225105-10066",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "04455c6897a67958cb3d112305394e24b2d4bc35b3b66e559f619d57fe81e2e1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49338"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49337"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49334"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49336"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49335"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/04455c6897a6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220701225105-10066": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6bb642abc37b",
	                        "kubernetes-upgrade-20220701225105-10066"
	                    ],
	                    "NetworkID": "3bc5e9344b9b90b1679edbd09c9063fb186936a7f0aaa6c9c5a8168603edf88b",
	                    "EndpointID": "45867237600cc1d7b13018c1669df28c5974e752d70fcc8f2b27bc7c61aa4d8d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
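A dump this size is easier to read by pulling only the fields the post-mortem needs; a sketch with Go templates against the same container (the three answers match the JSON above: running, 127.0.0.1:49335, and 192.168.76.2):

	docker inspect -f '{{.State.Status}} since {{.State.StartedAt}}' kubernetes-upgrade-20220701225105-10066
	docker port kubernetes-upgrade-20220701225105-10066 8443
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kubernetes-upgrade-20220701225105-10066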
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220701225105-10066 -n kubernetes-upgrade-20220701225105-10066
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220701225105-10066 -n kubernetes-upgrade-20220701225105-10066: exit status 2 (404.53817ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220701225105-10066 logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:55 UTC | 01 Jul 22 22:55 UTC |
	|         | cert-expiration-20220701225121-10066              |          |         |         |                     |                     |
	|         | --memory=2048                                     |          |         |         |                     |                     |
	|         | --cert-expiration=8760h                           |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:55 UTC | 01 Jul 22 22:55 UTC |
	|         | cert-expiration-20220701225121-10066              |          |         |         |                     |                     |
	| start   | -p calico-20220701225121-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:55 UTC | 01 Jul 22 22:56 UTC |
	|         | --memory=2048                                     |          |         |         |                     |                     |
	|         | --alsologtostderr                                 |          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |          |         |         |                     |                     |
	|         | --cni=calico --driver=docker                      |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:55 UTC | 01 Jul 22 22:55 UTC |
	|         | kindnet-20220701225120-10066                      |          |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| ssh     | -p auto-20220701225119-10066                      | minikube | jenkins | v1.26.0 | 01 Jul 22 22:55 UTC | 01 Jul 22 22:55 UTC |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:55 UTC | 01 Jul 22 22:56 UTC |
	|         | kindnet-20220701225120-10066                      |          |         |         |                     |                     |
	| delete  | -p auto-20220701225119-10066                      | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:56 UTC |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC |                     |
	|         | enable-default-cni-20220701225120-10066           |          |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |          |         |         |                     |                     |
	|         | --enable-default-cni=true                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	| start   | -p bridge-20220701225120-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:56 UTC |
	|         | --memory=2048                                     |          |         |         |                     |                     |
	|         | --alsologtostderr                                 |          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |          |         |         |                     |                     |
	|         | --cni=bridge --driver=docker                      |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	| ssh     | -p bridge-20220701225120-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:56 UTC |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| ssh     | -p calico-20220701225121-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:56 UTC |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| delete  | -p calico-20220701225121-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:56 UTC |
	| delete  | -p bridge-20220701225120-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:57 UTC |
	| start   | -p cilium-20220701225121-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:58 UTC |
	|         | --memory=2048                                     |          |         |         |                     |                     |
	|         | --alsologtostderr                                 |          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |          |         |         |                     |                     |
	|         | --cni=cilium --driver=docker                      |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:57 UTC |                     |
	|         | old-k8s-version-20220701225700-10066              |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |          |         |         |                     |                     |
	|         | --disable-driver-mounts                           |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:57 UTC | 01 Jul 22 22:57 UTC |
	|         | enable-default-cni-20220701225120-10066           |          |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:57 UTC | 01 Jul 22 22:57 UTC |
	|         | enable-default-cni-20220701225120-10066           |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:57 UTC |                     |
	|         | no-preload-20220701225718-10066                   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --preload=false                       |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| ssh     | -p cilium-20220701225121-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:58 UTC | 01 Jul 22 22:58 UTC |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| delete  | -p cilium-20220701225121-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:58 UTC | 01 Jul 22 22:58 UTC |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:58 UTC | 01 Jul 22 22:59 UTC |
	|         | embed-certs-20220701225830-10066                  |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC | 01 Jul 22 22:59 UTC |
	|         | embed-certs-20220701225830-10066                  |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC | 01 Jul 22 22:59 UTC |
	|         | embed-certs-20220701225830-10066                  |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC | 01 Jul 22 22:59 UTC |
	|         | embed-certs-20220701225830-10066                  |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC |                     |
	|         | embed-certs-20220701225830-10066                  |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/01 22:59:58
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 22:59:58.270911  235408 out.go:296] Setting OutFile to fd 1 ...
	I0701 22:59:58.271044  235408 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:59:58.271055  235408 out.go:309] Setting ErrFile to fd 2...
	I0701 22:59:58.271060  235408 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:59:58.271550  235408 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 22:59:58.271787  235408 out.go:303] Setting JSON to false
	I0701 22:59:58.273819  235408 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2551,"bootTime":1656713847,"procs":1296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 22:59:58.273890  235408 start.go:125] virtualization: kvm guest
	I0701 22:59:58.276339  235408 out.go:177] * [embed-certs-20220701225830-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0701 22:59:58.278020  235408 out.go:177]   - MINIKUBE_LOCATION=14483
	I0701 22:59:58.277941  235408 notify.go:193] Checking for updates...
	I0701 22:59:58.279654  235408 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 22:59:58.281170  235408 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 22:59:58.282568  235408 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 22:59:58.284168  235408 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 22:59:58.286596  235408 config.go:178] Loaded profile config "embed-certs-20220701225830-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 22:59:58.287647  235408 driver.go:360] Setting default libvirt URI to qemu:///system
	I0701 22:59:58.329907  235408 docker.go:137] docker version: linux-20.10.17
	I0701 22:59:58.330245  235408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 22:59:58.438728  235408 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:65 SystemTime:2022-07-01 22:59:58.361863628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 22:59:58.438835  235408 docker.go:254] overlay module found
	I0701 22:59:58.441052  235408 out.go:177] * Using the docker driver based on existing profile
	I0701 22:59:58.442662  235408 start.go:284] selected driver: docker
	I0701 22:59:58.442683  235408 start.go:808] validating driver "docker" against &{Name:embed-certs-20220701225830-10066 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220701225830-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 22:59:58.442785  235408 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 22:59:58.443603  235408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 22:59:58.550264  235408 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:65 SystemTime:2022-07-01 22:59:58.473008189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 22:59:58.550632  235408 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 22:59:58.550662  235408 cni.go:95] Creating CNI manager for ""
	I0701 22:59:58.550671  235408 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 22:59:58.550681  235408 start_flags.go:310] config:
	{Name:embed-certs-20220701225830-10066 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220701225830-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 22:59:58.553050  235408 out.go:177] * Starting control plane node embed-certs-20220701225830-10066 in cluster embed-certs-20220701225830-10066
	I0701 22:59:58.554461  235408 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0701 22:59:58.555785  235408 out.go:177] * Pulling base image ...
	I0701 22:59:58.557082  235408 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 22:59:58.557119  235408 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0701 22:59:58.557122  235408 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4
	I0701 22:59:58.557220  235408 cache.go:57] Caching tarball of preloaded images
	I0701 22:59:58.557426  235408 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 22:59:58.557449  235408 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on containerd
	I0701 22:59:58.557546  235408 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/embed-certs-20220701225830-10066/config.json ...
	I0701 22:59:58.592438  235408 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0701 22:59:58.592464  235408 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0701 22:59:58.592485  235408 cache.go:208] Successfully downloaded all kic artifacts
	I0701 22:59:58.592532  235408 start.go:352] acquiring machines lock for embed-certs-20220701225830-10066: {Name:mk7700ad3a5ae6c33755b1735ad652e63d9ad7e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 22:59:58.592631  235408 start.go:356] acquired machines lock for "embed-certs-20220701225830-10066" in 75.226µs
	I0701 22:59:58.592654  235408 start.go:94] Skipping create...Using existing machine configuration
	I0701 22:59:58.592662  235408 fix.go:55] fixHost starting: 
	I0701 22:59:58.592902  235408 cli_runner.go:164] Run: docker container inspect embed-certs-20220701225830-10066 --format={{.State.Status}}
	I0701 22:59:58.627478  235408 fix.go:103] recreateIfNeeded on embed-certs-20220701225830-10066: state=Stopped err=<nil>
	W0701 22:59:58.627515  235408 fix.go:129] unexpected machine state, will restart: <nil>
	I0701 22:59:58.629575  235408 out.go:177] * Restarting existing docker container for "embed-certs-20220701225830-10066" ...
	I0701 22:59:55.580108  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:58.079689  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:57.050728  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 22:59:59.051359  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 22:59:58.630835  235408 cli_runner.go:164] Run: docker start embed-certs-20220701225830-10066
	I0701 22:59:59.018434  235408 cli_runner.go:164] Run: docker container inspect embed-certs-20220701225830-10066 --format={{.State.Status}}
	I0701 22:59:59.055830  235408 kic.go:416] container "embed-certs-20220701225830-10066" state is running.
	I0701 22:59:59.056143  235408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220701225830-10066
	I0701 22:59:59.091907  235408 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/embed-certs-20220701225830-10066/config.json ...
	I0701 22:59:59.092126  235408 machine.go:88] provisioning docker machine ...
	I0701 22:59:59.092152  235408 ubuntu.go:169] provisioning hostname "embed-certs-20220701225830-10066"
	I0701 22:59:59.092194  235408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220701225830-10066
	I0701 22:59:59.126469  235408 main.go:134] libmachine: Using SSH client type: native
	I0701 22:59:59.126692  235408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49412 <nil> <nil>}
	I0701 22:59:59.126726  235408 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220701225830-10066 && echo "embed-certs-20220701225830-10066" | sudo tee /etc/hostname
	I0701 22:59:59.127378  235408 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34518->127.0.0.1:49412: read: connection reset by peer
	I0701 23:00:02.250726  235408 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220701225830-10066
	
	I0701 23:00:02.250819  235408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220701225830-10066
	I0701 23:00:02.285023  235408 main.go:134] libmachine: Using SSH client type: native
	I0701 23:00:02.285162  235408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49412 <nil> <nil>}
	I0701 23:00:02.285182  235408 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220701225830-10066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220701225830-10066/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220701225830-10066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 23:00:02.398118  235408 main.go:134] libmachine: SSH cmd err, output: <nil>: 
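The pair of SSH commands above (set the hostname, then pin it in /etc/hosts) is minikube's standard provisioning step; without a matching 127.0.1.1 entry, sudo and other tools inside the container warn about an unresolvable hostname. A minimal sketch of the same fix run by hand, with HOSTNAME standing in for the profile name:

	HOSTNAME=embed-certs-20220701225830-10066
	sudo hostname "$HOSTNAME" && echo "$HOSTNAME" | sudo tee /etc/hostname
	if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	  # rewrite the existing 127.0.1.1 line to carry the new name
	  sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $HOSTNAME/" /etc/hosts
	else
	  echo "127.0.1.1 $HOSTNAME" | sudo tee -a /etc/hosts
	fi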
	I0701 23:00:02.398172  235408 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
	I0701 23:00:02.398207  235408 ubuntu.go:177] setting up certificates
	I0701 23:00:02.398218  235408 provision.go:83] configureAuth start
	I0701 23:00:02.398280  235408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220701225830-10066
	I0701 23:00:02.434428  235408 provision.go:138] copyHostCerts
	I0701 23:00:02.434495  235408 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
	I0701 23:00:02.434514  235408 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
	I0701 23:00:02.434613  235408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
	I0701 23:00:02.434703  235408 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
	I0701 23:00:02.434716  235408 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
	I0701 23:00:02.434755  235408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
	I0701 23:00:02.434825  235408 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
	I0701 23:00:02.434835  235408 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
	I0701 23:00:02.434867  235408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1679 bytes)
	I0701 23:00:02.434929  235408 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220701225830-10066 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220701225830-10066]
	I0701 23:00:02.558945  235408 provision.go:172] copyRemoteCerts
	I0701 23:00:02.558992  235408 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 23:00:02.559031  235408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220701225830-10066
	I0701 23:00:02.594795  235408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/embed-certs-20220701225830-10066/id_rsa Username:docker}
	I0701 23:00:02.681946  235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 23:00:02.699903  235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0701 23:00:02.717597  235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
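The three scp calls above install the docker-machine style credentials inside the node:

	/etc/docker/ca.pem          CA certificate (1078 bytes)
	/etc/docker/server.pem      server certificate (1269 bytes)
	/etc/docker/server-key.pem  server private key (1679 bytes)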
	I0701 23:00:02.734964  235408 provision.go:86] duration metric: configureAuth took 336.727262ms
	I0701 23:00:02.734990  235408 ubuntu.go:193] setting minikube options for container-runtime
	I0701 23:00:02.735182  235408 config.go:178] Loaded profile config "embed-certs-20220701225830-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:00:02.735198  235408 machine.go:91] provisioned docker machine in 3.643056522s
	I0701 23:00:02.735207  235408 start.go:306] post-start starting for "embed-certs-20220701225830-10066" (driver="docker")
	I0701 23:00:02.735214  235408 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 23:00:02.735263  235408 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 23:00:02.735300  235408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220701225830-10066
	I0701 23:00:02.768989  235408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/embed-certs-20220701225830-10066/id_rsa Username:docker}
	I0701 23:00:02.853823  235408 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 23:00:02.856393  235408 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 23:00:02.856413  235408 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 23:00:02.856421  235408 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 23:00:02.856427  235408 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0701 23:00:02.856435  235408 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
	I0701 23:00:02.856509  235408 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
	I0701 23:00:02.856593  235408 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem -> 100662.pem in /etc/ssl/certs
	I0701 23:00:02.856667  235408 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 23:00:02.863109  235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:00:02.879759  235408 start.go:309] post-start completed in 144.541927ms
	I0701 23:00:02.879824  235408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 23:00:02.879861  235408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220701225830-10066
	I0701 23:00:02.912798  235408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/embed-certs-20220701225830-10066/id_rsa Username:docker}
	I0701 23:00:02.994828  235408 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 23:00:02.998675  235408 fix.go:57] fixHost completed within 4.406009666s
	I0701 23:00:02.998715  235408 start.go:81] releasing machines lock for "embed-certs-20220701225830-10066", held for 4.406070232s
	I0701 23:00:02.998811  235408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220701225830-10066
	I0701 23:00:03.033064  235408 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0701 23:00:03.033113  235408 ssh_runner.go:195] Run: systemctl --version
	I0701 23:00:03.033138  235408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220701225830-10066
	I0701 23:00:03.033145  235408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220701225830-10066
	I0701 23:00:03.071007  235408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/embed-certs-20220701225830-10066/id_rsa Username:docker}
	I0701 23:00:03.071411  235408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/embed-certs-20220701225830-10066/id_rsa Username:docker}
	I0701 23:00:03.171479  235408 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 23:00:03.182465  235408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 23:00:03.191354  235408 docker.go:179] disabling docker service ...
	I0701 23:00:03.191394  235408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 23:00:03.200968  235408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 23:00:03.209599  235408 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 23:00:00.079994  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:02.580583  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:01.550421  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:03.550712  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:03.281130  235408 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 23:00:03.352077  235408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
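Note that docker.socket is stopped and disabled alongside docker.service: the socket unit listens on /var/run/docker.sock and would re-activate the daemon on the next connection, so masking the service alone is not enough. The equivalent steps by hand (a sketch; run inside the node, not on the host):

	sudo systemctl stop docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service   # refuses all future start requests
	systemctl is-active --quiet docker || echo "docker is down"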
	I0701 23:00:03.360716  235408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 23:00:03.372565  235408 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0701 23:00:03.380204  235408 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0701 23:00:03.388476  235408 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0701 23:00:03.395966  235408 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0701 23:00:03.403330  235408 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0701 23:00:03.410688  235408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
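The base64 payload dmVyc2lvbiA9IDIK decodes to the single line "version = 2", so the sequence above patches /etc/containerd/config.toml in place (sandbox image, oom score adjustment, cgroup driver, CNI conf_dir) and then pulls in a v2-schema snippet through the imports directive. The decode can be checked directly:

	$ printf %s dmVyc2lvbiA9IDIK | base64 -d
	version = 2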
	I0701 23:00:03.423058  235408 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 23:00:03.429283  235408 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 23:00:03.435491  235408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 23:00:03.505856  235408 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 23:00:03.577732  235408 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0701 23:00:03.577803  235408 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0701 23:00:03.581813  235408 start.go:471] Will wait 60s for crictl version
	I0701 23:00:03.581857  235408 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:00:03.606591  235408 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-07-01T23:00:03Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0701 23:00:05.079847  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:07.080009  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:05.551452  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:08.050717  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:10.051095  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:09.580529  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:12.079913  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:14.654115  235408 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:00:14.679131  235408 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0701 23:00:14.679204  235408 ssh_runner.go:195] Run: containerd --version
	I0701 23:00:14.711721  235408 ssh_runner.go:195] Run: containerd --version
	I0701 23:00:14.745266  235408 out.go:177] * Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	I0701 23:00:12.550577  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:14.551246  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:14.747069  235408 cli_runner.go:164] Run: docker network inspect embed-certs-20220701225830-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 23:00:14.779930  235408 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0701 23:00:14.783360  235408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:00:14.792794  235408 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:00:14.792859  235408 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:00:14.816892  235408 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:00:14.816918  235408 containerd.go:461] Images already preloaded, skipping extraction
	I0701 23:00:14.816965  235408 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:00:14.840318  235408 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:00:14.840340  235408 cache_images.go:84] Images are preloaded, skipping loading
	I0701 23:00:14.840388  235408 ssh_runner.go:195] Run: sudo crictl info
	I0701 23:00:14.863691  235408 cni.go:95] Creating CNI manager for ""
	I0701 23:00:14.863713  235408 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:00:14.863722  235408 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0701 23:00:14.863734  235408 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220701225830-10066 NodeName:embed-certs-20220701225830-10066 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0701 23:00:14.863881  235408 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220701225830-10066"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
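The manifest above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into the one file written to /var/tmp/minikube/kubeadm.yaml.new below. A rendered config like this can be sanity-checked without touching the cluster (a sketch; kubeadm's init supports a dry-run mode):

	sudo /var/lib/minikube/binaries/v1.24.2/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run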
	
	I0701 23:00:14.863974  235408 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220701225830-10066 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220701225830-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
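The empty ExecStart= line in the [Service] section above is deliberate: systemd appends drop-in values to list-valued settings, so the override must first clear the stock ExecStart before substituting minikube's kubelet invocation. The drop-in lands as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (scp'd below) and takes effect with:

	sudo systemctl daemon-reload
	sudo systemctl restart kubelet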
	I0701 23:00:14.864027  235408 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0701 23:00:14.870925  235408 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 23:00:14.870977  235408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 23:00:14.877458  235408 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (525 bytes)
	I0701 23:00:14.889840  235408 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 23:00:14.902235  235408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2061 bytes)
	I0701 23:00:14.914307  235408 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0701 23:00:14.916993  235408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
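Together with the host.minikube.internal rewrite at 23:00:14.783360 above, the node's /etc/hosts now carries two stable names, the second of which backs the controlPlaneEndpoint in the kubeadm config:

	192.168.67.1	host.minikube.internal
	192.168.67.2	control-plane.minikube.internal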
	I0701 23:00:14.925664  235408 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/embed-certs-20220701225830-10066 for IP: 192.168.67.2
	I0701 23:00:14.925828  235408 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
	I0701 23:00:14.925883  235408 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
	I0701 23:00:14.925961  235408 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/embed-certs-20220701225830-10066/client.key
	I0701 23:00:14.926035  235408 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/embed-certs-20220701225830-10066/apiserver.key.c7fa3a9e
	I0701 23:00:14.926082  235408 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/embed-certs-20220701225830-10066/proxy-client.key
	I0701 23:00:14.926207  235408 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem (1338 bytes)
	W0701 23:00:14.926248  235408 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066_empty.pem, impossibly tiny 0 bytes
	I0701 23:00:14.926265  235408 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 23:00:14.926300  235408 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
	I0701 23:00:14.926332  235408 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
	I0701 23:00:14.926365  235408 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1679 bytes)
	I0701 23:00:14.926418  235408 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:00:14.927102  235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/embed-certs-20220701225830-10066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0701 23:00:14.943560  235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/embed-certs-20220701225830-10066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0701 23:00:14.959838  235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/embed-certs-20220701225830-10066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 23:00:14.976190  235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/embed-certs-20220701225830-10066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0701 23:00:14.992281  235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 23:00:15.008766  235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0701 23:00:15.025447  235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 23:00:15.041939  235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 23:00:15.058617  235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem --> /usr/share/ca-certificates/10066.pem (1338 bytes)
	I0701 23:00:15.075031  235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /usr/share/ca-certificates/100662.pem (1708 bytes)
	I0701 23:00:15.092212  235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 23:00:15.108891  235408 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 23:00:15.121155  235408 ssh_runner.go:195] Run: openssl version
	I0701 23:00:15.125902  235408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10066.pem && ln -fs /usr/share/ca-certificates/10066.pem /etc/ssl/certs/10066.pem"
	I0701 23:00:15.132941  235408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10066.pem
	I0701 23:00:15.136196  235408 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul  1 22:28 /usr/share/ca-certificates/10066.pem
	I0701 23:00:15.136235  235408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10066.pem
	I0701 23:00:15.141121  235408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10066.pem /etc/ssl/certs/51391683.0"
	I0701 23:00:15.147658  235408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100662.pem && ln -fs /usr/share/ca-certificates/100662.pem /etc/ssl/certs/100662.pem"
	I0701 23:00:15.154815  235408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100662.pem
	I0701 23:00:15.157595  235408 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul  1 22:28 /usr/share/ca-certificates/100662.pem
	I0701 23:00:15.157636  235408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100662.pem
	I0701 23:00:15.162343  235408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100662.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 23:00:15.168728  235408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 23:00:15.176282  235408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:00:15.180512  235408 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul  1 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:00:15.180548  235408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:00:15.185322  235408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
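The hash-and-symlink steps above follow OpenSSL's CA lookup convention: clients search /etc/ssl/certs for a file named <subject-hash>.0, where the hash comes from `openssl x509 -hash`. Recreating one link by hand, using the minikubeCA file from this run:

	# Compute the subject hash OpenSSL uses for CA lookup, then create
	# the matching symlink; for this CA the hash is b5213941, as above.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"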
	I0701 23:00:15.191675  235408 kubeadm.go:395] StartCluster: {Name:embed-certs-20220701225830-10066 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220701225830-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:00:15.191758  235408 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0701 23:00:15.191786  235408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:00:15.215440  235408 cri.go:87] found id: "4c8b29014183db39e6e2c6554ad01479a632031579c0dac495b9f3ceaaea1d9b"
	I0701 23:00:15.215466  235408 cri.go:87] found id: "49a774dd0d2f4dd600ad636dedcea2ce9e046364efd3f8e4f66da657abf03b05"
	I0701 23:00:15.215477  235408 cri.go:87] found id: "f35534e686cebfe1b85d62262e785437085a14d2dccd701df9ab3e7ffe6c9853"
	I0701 23:00:15.215487  235408 cri.go:87] found id: "b4ff3be1324b2ff9d0e3c3afb1e3b7cba48800827cfb704564ef12f4bcbdaf7c"
	I0701 23:00:15.215494  235408 cri.go:87] found id: "6d6e8c270009a98182ab6c35e55de13e554cba58cd81593ce561846bba7660aa"
	I0701 23:00:15.215502  235408 cri.go:87] found id: "176b0e6372260ee0ace52d369f37120ba201373efd26256fbb77b72bcbbfebee"
	I0701 23:00:15.215511  235408 cri.go:87] found id: "8c2b6a995c0337ca44537ad290318b99291d36f96367f55ac0b76fc3e31a7a97"
	I0701 23:00:15.215520  235408 cri.go:87] found id: "d8f18479d258f2f42ede3fa10c5307b530b037e94aaab9308182a9572ba396cf"
	I0701 23:00:15.215525  235408 cri.go:87] found id: ""
	I0701 23:00:15.215565  235408 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0701 23:00:15.227285  235408 cri.go:114] JSON = null
	W0701 23:00:15.227333  235408 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 8
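Here minikube cross-checks two views of the runtime: crictl reports eight kube-system containers, while `runc list` under the containerd runc root returns `null`, i.e. no containers recorded there as paused, so there is nothing to unpause and the warning is informational. Both probes can be replayed by hand (commands taken verbatim from this run):

	# Count the kube-system containers crictl knows about, then ask runc
	# for its own container list under the same root.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l
	sudo runc --root /run/containerd/runc/k8s.io list -f json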
	I0701 23:00:15.227379  235408 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 23:00:15.233945  235408 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0701 23:00:15.233963  235408 kubeadm.go:626] restartCluster start
	I0701 23:00:15.233999  235408 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 23:00:15.240484  235408 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:00:15.241174  235408 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220701225830-10066" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:00:15.241716  235408 kubeconfig.go:127] "embed-certs-20220701225830-10066" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig - will repair!
	I0701 23:00:15.242489  235408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:00:15.243993  235408 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 23:00:15.250252  235408 api_server.go:165] Checking apiserver status ...
	I0701 23:00:15.250300  235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:00:15.257716  235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
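The repeated "Checking apiserver status" records here are a fixed-interval poll: judging by the timestamps, minikube re-runs the pgrep roughly every 200ms until a kube-apiserver process appears or the retry budget runs out. Functionally it is equivalent to this sketch (the interval is illustrative):

	# Wait for a kube-apiserver process whose full command line matches.
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.2
	done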
	I0701 23:00:15.458122  235408 api_server.go:165] Checking apiserver status ...
	I0701 23:00:15.458203  235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:00:15.467814  235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:00:15.658250  235408 api_server.go:165] Checking apiserver status ...
	I0701 23:00:15.658328  235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:00:15.667311  235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:00:15.858625  235408 api_server.go:165] Checking apiserver status ...
	I0701 23:00:15.858694  235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:00:15.868475  235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:00:16.058762  235408 api_server.go:165] Checking apiserver status ...
	I0701 23:00:16.058855  235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:00:16.067282  235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:00:16.258588  235408 api_server.go:165] Checking apiserver status ...
	I0701 23:00:16.258665  235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:00:16.267883  235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:00:16.458192  235408 api_server.go:165] Checking apiserver status ...
	I0701 23:00:16.458251  235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:00:16.466832  235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:00:16.658110  235408 api_server.go:165] Checking apiserver status ...
	I0701 23:00:16.658191  235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:00:16.666792  235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:00:16.857872  235408 api_server.go:165] Checking apiserver status ...
	I0701 23:00:16.857930  235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:00:16.866428  235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:00:17.058705  235408 api_server.go:165] Checking apiserver status ...
	I0701 23:00:17.058775  235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:00:17.067392  235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:00:17.258667  235408 api_server.go:165] Checking apiserver status ...
	I0701 23:00:17.258750  235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:00:17.267816  235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:00:17.458072  235408 api_server.go:165] Checking apiserver status ...
	I0701 23:00:17.458136  235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:00:17.466836  235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:00:17.658111  235408 api_server.go:165] Checking apiserver status ...
	I0701 23:00:17.658168  235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:00:17.666551  235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:00:17.858833  235408 api_server.go:165] Checking apiserver status ...
	I0701 23:00:17.858907  235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:00:17.867469  235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:00:18.058731  235408 api_server.go:165] Checking apiserver status ...
	I0701 23:00:18.058787  235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:00:18.067326  235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:00:18.258823  235408 api_server.go:165] Checking apiserver status ...
	I0701 23:00:18.258921  235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:00:18.267872  235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:00:18.267889  235408 api_server.go:165] Checking apiserver status ...
	I0701 23:00:18.267919  235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:00:14.579397  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:16.579551  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:17.050422  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:19.050716  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	W0701 23:00:18.276149  235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:00:18.276175  235408 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0701 23:00:18.276181  235408 kubeadm.go:1092] stopping kube-system containers ...
	I0701 23:00:18.276192  235408 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0701 23:00:18.276229  235408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:00:18.300183  235408 cri.go:87] found id: "4c8b29014183db39e6e2c6554ad01479a632031579c0dac495b9f3ceaaea1d9b"
	I0701 23:00:18.300207  235408 cri.go:87] found id: "49a774dd0d2f4dd600ad636dedcea2ce9e046364efd3f8e4f66da657abf03b05"
	I0701 23:00:18.300219  235408 cri.go:87] found id: "f35534e686cebfe1b85d62262e785437085a14d2dccd701df9ab3e7ffe6c9853"
	I0701 23:00:18.300227  235408 cri.go:87] found id: "b4ff3be1324b2ff9d0e3c3afb1e3b7cba48800827cfb704564ef12f4bcbdaf7c"
	I0701 23:00:18.300236  235408 cri.go:87] found id: "6d6e8c270009a98182ab6c35e55de13e554cba58cd81593ce561846bba7660aa"
	I0701 23:00:18.300246  235408 cri.go:87] found id: "176b0e6372260ee0ace52d369f37120ba201373efd26256fbb77b72bcbbfebee"
	I0701 23:00:18.300261  235408 cri.go:87] found id: "8c2b6a995c0337ca44537ad290318b99291d36f96367f55ac0b76fc3e31a7a97"
	I0701 23:00:18.300276  235408 cri.go:87] found id: "d8f18479d258f2f42ede3fa10c5307b530b037e94aaab9308182a9572ba396cf"
	I0701 23:00:18.300290  235408 cri.go:87] found id: ""
	I0701 23:00:18.300301  235408 cri.go:232] Stopping containers: [4c8b29014183db39e6e2c6554ad01479a632031579c0dac495b9f3ceaaea1d9b 49a774dd0d2f4dd600ad636dedcea2ce9e046364efd3f8e4f66da657abf03b05 f35534e686cebfe1b85d62262e785437085a14d2dccd701df9ab3e7ffe6c9853 b4ff3be1324b2ff9d0e3c3afb1e3b7cba48800827cfb704564ef12f4bcbdaf7c 6d6e8c270009a98182ab6c35e55de13e554cba58cd81593ce561846bba7660aa 176b0e6372260ee0ace52d369f37120ba201373efd26256fbb77b72bcbbfebee 8c2b6a995c0337ca44537ad290318b99291d36f96367f55ac0b76fc3e31a7a97 d8f18479d258f2f42ede3fa10c5307b530b037e94aaab9308182a9572ba396cf]
	I0701 23:00:18.300363  235408 ssh_runner.go:195] Run: which crictl
	I0701 23:00:18.303065  235408 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 4c8b29014183db39e6e2c6554ad01479a632031579c0dac495b9f3ceaaea1d9b 49a774dd0d2f4dd600ad636dedcea2ce9e046364efd3f8e4f66da657abf03b05 f35534e686cebfe1b85d62262e785437085a14d2dccd701df9ab3e7ffe6c9853 b4ff3be1324b2ff9d0e3c3afb1e3b7cba48800827cfb704564ef12f4bcbdaf7c 6d6e8c270009a98182ab6c35e55de13e554cba58cd81593ce561846bba7660aa 176b0e6372260ee0ace52d369f37120ba201373efd26256fbb77b72bcbbfebee 8c2b6a995c0337ca44537ad290318b99291d36f96367f55ac0b76fc3e31a7a97 d8f18479d258f2f42ede3fa10c5307b530b037e94aaab9308182a9572ba396cf
	I0701 23:00:18.328625  235408 ssh_runner.go:195] Run: sudo systemctl stop kubelet
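For the reconfigure, minikube first quiesces the node: all kube-system containers are stopped via crictl and then kubelet is stopped so nothing restarts them. The same bulk stop without pasting container IDs by hand, as a sketch:

	# Feed every kube-system container ID straight into crictl stop.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system \
	  | xargs -r sudo crictl stop
	sudo systemctl stop kubelet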
	I0701 23:00:18.338646  235408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:00:18.345594  235408 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul  1 22:58 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul  1 22:58 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Jul  1 22:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jul  1 22:58 /etc/kubernetes/scheduler.conf
	
	I0701 23:00:18.345647  235408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0701 23:00:18.352504  235408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0701 23:00:18.359262  235408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0701 23:00:18.365558  235408 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:00:18.365599  235408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 23:00:18.371746  235408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0701 23:00:18.378024  235408 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:00:18.378069  235408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0701 23:00:18.384176  235408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:00:18.390610  235408 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0701 23:00:18.390640  235408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:00:18.434898  235408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:00:18.994369  235408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:00:19.180726  235408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:00:19.241499  235408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
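Note that the restart path replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than running a full `kubeadm init`, which regenerates configuration while reusing the existing state under /var/lib/minikube/etcd that was detected a few records above. Each phase is an ordinary subcommand, e.g. (paths from this run):

	# Regenerate just the static pod manifests for the control plane.
	sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" \
	  kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml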
	I0701 23:00:19.337746  235408 api_server.go:51] waiting for apiserver process to appear ...
	I0701 23:00:19.337809  235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:00:19.848374  235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:00:20.348263  235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:00:20.427417  235408 api_server.go:71] duration metric: took 1.089672892s to wait for apiserver process to appear ...
	I0701 23:00:20.427468  235408 api_server.go:87] waiting for apiserver healthz status ...
	I0701 23:00:20.427483  235408 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0701 23:00:20.427896  235408 api_server.go:256] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0701 23:00:20.928117  235408 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0701 23:00:19.080121  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:21.080325  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:23.579390  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:21.051135  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:23.550711  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:24.113066  235408 api_server.go:266] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0701 23:00:24.113104  235408 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0701 23:00:24.428460  235408 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0701 23:00:24.433753  235408 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:00:24.433794  235408 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:00:24.928886  235408 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0701 23:00:24.934737  235408 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:00:24.934757  235408 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:00:25.428106  235408 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0701 23:00:25.433474  235408 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:00:25.433504  235408 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
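The only checks still failing in these 500 responses are one-time bootstrap post-start hooks (rbac/bootstrap-roles and, in the earlier probe, the scheduling priority classes). Once they complete, /healthz flips to 200, as the very next probe shows. The objects those hooks create are then visible; an illustrative spot check with kubectl pointed at this cluster:

	# Both of these exist only after the bootstrap hooks have run.
	kubectl get clusterrole system:basic-user
	kubectl get priorityclass system-node-critical system-cluster-critical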
	I0701 23:00:25.928790  235408 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0701 23:00:25.937308  235408 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0701 23:00:25.944868  235408 api_server.go:140] control plane version: v1.24.2
	I0701 23:00:25.944889  235408 api_server.go:130] duration metric: took 5.517413571s to wait for apiserver health ...
	I0701 23:00:25.944898  235408 cni.go:95] Creating CNI manager for ""
	I0701 23:00:25.944905  235408 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:00:25.946852  235408 out.go:177] * Configuring CNI (Container Networking Interface) ...
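With the docker driver and the containerd runtime, minikube selects kindnet as the CNI and deploys it into kube-system. A couple of illustrative checks once it lands (the profile name is from this run; the DaemonSet name follows minikube's usual naming and is an assumption here):

	minikube ssh -p embed-certs-20220701225830-10066 "ls /etc/cni/net.d"
	kubectl -n kube-system get daemonset kindnet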
	I0701 23:00:27.646987  160696 kubeadm.go:397] StartCluster complete in 7m56.791378401s
	I0701 23:00:27.647038  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 23:00:27.647092  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 23:00:27.670385  160696 cri.go:87] found id: ""
	I0701 23:00:27.670408  160696 logs.go:274] 0 containers: []
	W0701 23:00:27.670416  160696 logs.go:276] No container was found matching "kube-apiserver"
	I0701 23:00:27.670424  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 23:00:27.670479  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 23:00:27.695513  160696 cri.go:87] found id: ""
	I0701 23:00:27.695537  160696 logs.go:274] 0 containers: []
	W0701 23:00:27.695546  160696 logs.go:276] No container was found matching "etcd"
	I0701 23:00:27.695555  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 23:00:27.695610  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 23:00:27.718045  160696 cri.go:87] found id: ""
	I0701 23:00:27.718072  160696 logs.go:274] 0 containers: []
	W0701 23:00:27.718081  160696 logs.go:276] No container was found matching "coredns"
	I0701 23:00:27.718088  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 23:00:27.718135  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 23:00:27.742214  160696 cri.go:87] found id: ""
	I0701 23:00:27.742241  160696 logs.go:274] 0 containers: []
	W0701 23:00:27.742249  160696 logs.go:276] No container was found matching "kube-scheduler"
	I0701 23:00:27.742257  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 23:00:27.742312  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 23:00:27.764994  160696 cri.go:87] found id: ""
	I0701 23:00:27.765033  160696 logs.go:274] 0 containers: []
	W0701 23:00:27.765040  160696 logs.go:276] No container was found matching "kube-proxy"
	I0701 23:00:27.765047  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 23:00:27.765095  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 23:00:27.787131  160696 cri.go:87] found id: ""
	I0701 23:00:27.787155  160696 logs.go:274] 0 containers: []
	W0701 23:00:27.787161  160696 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0701 23:00:27.787166  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 23:00:27.787206  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 23:00:27.809474  160696 cri.go:87] found id: ""
	I0701 23:00:27.809497  160696 logs.go:274] 0 containers: []
	W0701 23:00:27.809503  160696 logs.go:276] No container was found matching "storage-provisioner"
	I0701 23:00:27.809508  160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 23:00:27.809552  160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 23:00:27.832826  160696 cri.go:87] found id: ""
	I0701 23:00:27.832850  160696 logs.go:274] 0 containers: []
	W0701 23:00:27.832857  160696 logs.go:276] No container was found matching "kube-controller-manager"
	I0701 23:00:27.832867  160696 logs.go:123] Gathering logs for kubelet ...
	I0701 23:00:27.832877  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 23:00:27.883957  160696 logs.go:138] Found kubelet problem: Jul 01 23:00:27 kubernetes-upgrade-20220701225105-10066 kubelet[11544]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
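This kubelet line is the root cause of the failure that follows: Kubernetes 1.24 removed the dockershim networking flags (--network-plugin, --cni-bin-dir, --cni-conf-dir) from kubelet, so a flags file or drop-in carried over from the v1.16 configuration makes every kubelet start exit at flag parsing, and kubeadm's wait-control-plane phase below times out against a kubelet that never comes up. An illustrative diagnosis on the node (the flag's exact source is an assumption; the two usual locations are checked):

	# Find where the stale flag survives, then confirm kubelet's exit reason.
	grep -R -e '--cni-conf-dir' /var/lib/kubelet/kubeadm-flags.env /etc/systemd/system/kubelet.service.d/
	journalctl -u kubelet -n 20 --no-pager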
	I0701 23:00:27.939530  160696 logs.go:123] Gathering logs for dmesg ...
	I0701 23:00:27.939568  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 23:00:27.959413  160696 logs.go:123] Gathering logs for describe nodes ...
	I0701 23:00:27.959491  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0701 23:00:28.015733  160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0701 23:00:28.015759  160696 logs.go:123] Gathering logs for containerd ...
	I0701 23:00:28.015772  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 23:00:28.063280  160696 logs.go:123] Gathering logs for container status ...
	I0701 23:00:28.063306  160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0701 23:00:28.089939  160696 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1012-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0701 22:58:31.270801    9663 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1012-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0701 23:00:28.089978  160696 out.go:239] * 
	W0701 23:00:28.090236  160696 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: identical to the kubeadm init output quoted above
	
	W0701 23:00:28.090268  160696 out.go:239] * 
	W0701 23:00:28.091045  160696 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 23:00:28.093078  160696 out.go:177] X Problems detected in kubelet:
	I0701 23:00:28.095148  160696 out.go:177]   Jul 01 23:00:27 kubernetes-upgrade-20220701225105-10066 kubelet[11544]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0701 23:00:28.098679  160696 out.go:177] 
	W0701 23:00:28.100157  160696 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1012-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
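Each of those kubelet-check probes is the same HTTP call; when debugging by hand it can be simpler to poll the endpoint in a loop (URL and port exactly as quoted above):

	# wait until the kubelet's health endpoint answers, then report
	until curl -sf http://localhost:10248/healthz >/dev/null; do sleep 2; done; echo 'kubelet healthy'

Here it never answers, which is why kubeadm gives up after its 4m0s wait.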
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0701 22:58:31.270801    9663 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1012-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0701 23:00:28.100275  160696 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0701 23:00:28.100315  160696 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0701 23:00:28.102745  160696 out.go:177] 
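The 'unknown flag: --cni-conf-dir' line above is the actual crash: that kubelet flag was removed along with dockershim in Kubernetes v1.24, so a kubelet invocation still carrying it (minikube writes the kubelet flags into a systemd drop-in on the node) exits immediately and every health probe above fails. The Suggestion amounts to rerunning the upgrade with an extra kubelet override; spelled out with this profile's existing flags (note it targets the cgroup driver, not the stale CNI flag, so it may not clear this particular failure):

	# the Suggestion above, made concrete for this profile
	out/minikube-linux-amd64 start -p kubernetes-upgrade-20220701225105-10066 \
	  --memory=2200 --kubernetes-version=v1.24.2 --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd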
	I0701 23:00:25.948230  235408 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:00:25.953260  235408 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:00:25.953282  235408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:00:26.018580  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0701 23:00:26.773418  235408 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 23:00:26.779956  235408 system_pods.go:59] 9 kube-system pods found
	I0701 23:00:26.779990  235408 system_pods.go:61] "coredns-6d4b75cb6d-vlp9g" [98c71b38-f849-4e40-91c2-ab549594fa28] Running
	I0701 23:00:26.780001  235408 system_pods.go:61] "etcd-embed-certs-20220701225830-10066" [5b2adfb2-7c61-4309-8413-cf8f61b7eff2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0701 23:00:26.780011  235408 system_pods.go:61] "kindnet-q2kq6" [849ea186-f716-4f3f-a313-c59e4ab27965] Running
	I0701 23:00:26.780019  235408 system_pods.go:61] "kube-apiserver-embed-certs-20220701225830-10066" [6799dad8-6269-4162-974d-76bbd12c1345] Running
	I0701 23:00:26.780024  235408 system_pods.go:61] "kube-controller-manager-embed-certs-20220701225830-10066" [1961bf23-e285-4fbb-af22-2051d4b05d07] Running
	I0701 23:00:26.780036  235408 system_pods.go:61] "kube-proxy-njxjm" [c3b911f8-f812-4a74-a5ea-7798a0120fa0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0701 23:00:26.780043  235408 system_pods.go:61] "kube-scheduler-embed-certs-20220701225830-10066" [383a49e0-3ccd-43e6-a46b-3b314a3facc4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0701 23:00:26.780054  235408 system_pods.go:61] "metrics-server-5c6f97fb75-nss5q" [c332f30d-8215-4761-a271-dbfdb476a516] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0701 23:00:26.780060  235408 system_pods.go:61] "storage-provisioner" [71d9493b-f59c-4466-acf5-ffa6c1753183] Running
	I0701 23:00:26.780069  235408 system_pods.go:74] duration metric: took 6.629693ms to wait for pod list to return data ...
	I0701 23:00:26.780077  235408 node_conditions.go:102] verifying NodePressure condition ...
	I0701 23:00:26.782515  235408 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0701 23:00:26.782572  235408 node_conditions.go:123] node cpu capacity is 8
	I0701 23:00:26.782586  235408 node_conditions.go:105] duration metric: took 2.50322ms to run NodePressure ...
	I0701 23:00:26.782611  235408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:00:26.906104  235408 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0701 23:00:26.909845  235408 kubeadm.go:777] kubelet initialised
	I0701 23:00:26.909869  235408 kubeadm.go:778] duration metric: took 3.711665ms waiting for restarted kubelet to initialise ...
	I0701 23:00:26.909876  235408 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:00:26.915175  235408 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-vlp9g" in "kube-system" namespace to be "Ready" ...
	I0701 23:00:26.919176  235408 pod_ready.go:92] pod "coredns-6d4b75cb6d-vlp9g" in "kube-system" namespace has status "Ready":"True"
	I0701 23:00:26.919194  235408 pod_ready.go:81] duration metric: took 3.994773ms waiting for pod "coredns-6d4b75cb6d-vlp9g" in "kube-system" namespace to be "Ready" ...
	I0701 23:00:26.919204  235408 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220701225830-10066" in "kube-system" namespace to be "Ready" ...
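Those pod_ready waits can be reproduced by hand with kubectl's built-in readiness gate (label and timeout taken from the waiting lines above; kubeconfig as minikube configures it):

	# block until the CoreDNS pods report Ready, with the same 4m0s budget
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m0s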
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-07-01 22:52:02 UTC, end at Fri 2022-07-01 23:00:29 UTC. --
	Jul 01 22:58:30 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:30.996387992Z" level=error msg="StopPodSandbox for \"\\\"Using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"\\\"Using\": not found"
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.013449630Z" level=info msg="StopPodSandbox for \"this\""
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.013516445Z" level=error msg="StopPodSandbox for \"this\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"this\": not found"
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.031328044Z" level=info msg="StopPodSandbox for \"endpoint\""
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.031394450Z" level=error msg="StopPodSandbox for \"endpoint\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint\": not found"
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.052124884Z" level=info msg="StopPodSandbox for \"is\""
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.052184496Z" level=error msg="StopPodSandbox for \"is\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"is\": not found"
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.069414849Z" level=info msg="StopPodSandbox for \"deprecated,\""
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.069468124Z" level=error msg="StopPodSandbox for \"deprecated,\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"deprecated,\": not found"
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.087742883Z" level=info msg="StopPodSandbox for \"please\""
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.087810238Z" level=error msg="StopPodSandbox for \"please\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"please\": not found"
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.105697085Z" level=info msg="StopPodSandbox for \"consider\""
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.105763365Z" level=error msg="StopPodSandbox for \"consider\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"consider\": not found"
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.123360405Z" level=info msg="StopPodSandbox for \"using\""
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.123421603Z" level=error msg="StopPodSandbox for \"using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"using\": not found"
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.140416302Z" level=info msg="StopPodSandbox for \"full\""
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.140469655Z" level=error msg="StopPodSandbox for \"full\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"full\": not found"
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.157391426Z" level=info msg="StopPodSandbox for \"URL\""
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.157445545Z" level=error msg="StopPodSandbox for \"URL\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL\": not found"
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.175061524Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.175124766Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.191816770Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.191866680Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.208616519Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.208676568Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
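A reading note on the containerd section above: the quoted "sandbox IDs" ('Using', 'this', 'endpoint', 'is', 'deprecated,', 'please', 'consider', 'using', 'full', 'URL', 'format', ...) concatenate back into a CRI deprecation warning about schemeless endpoints, so a warning line has evidently been split on whitespace and each token passed to StopPodSandbox as if it were an ID. Listing genuine sandbox IDs sidesteps that kind of parsing (same endpoint as in the kubeadm advice earlier):

	# print only real pod-sandbox IDs, one per line
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods -q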
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.007942] FS-Cache: N-cookie d=00000000de7c5649{9p.inode} n=00000000ed85478f
	[  +0.008742] FS-Cache: N-key=[8] '84a00f0200000000'
	[  +0.440350] FS-Cache: Duplicate cookie detected
	[  +0.004678] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006759] FS-Cache: O-cookie d=00000000de7c5649{9p.inode} n=000000000ba03907
	[  +0.007365] FS-Cache: O-key=[8] '8ea00f0200000000'
	[  +0.004953] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.008025] FS-Cache: N-cookie d=00000000de7c5649{9p.inode} n=00000000dd0fdb1e
	[  +0.008650] FS-Cache: N-key=[8] '8ea00f0200000000'
	[Jul 1 22:31] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jul 1 22:51] process 'docker/tmp/qemu-check843609603/check' started with executable stack
	[Jul 1 22:56] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff da 5a 07 89 70 97 08 06
	[  +9.422376] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 ec 04 d9 67 12 08 06
	[  +0.001554] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 12 e8 f5 ab 62 77 08 06
	[  +4.219906] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 34 d0 5a db d2 08 06
	[  +0.000387] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da 5a 07 89 70 97 08 06
	[Jul 1 22:57] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 f6 a0 f9 35 79 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 12 ec 04 d9 67 12 08 06
	
	* 
	* ==> kernel <==
	*  23:00:29 up 43 min,  0 users,  load average: 1.80, 3.38, 2.53
	Linux kubernetes-upgrade-20220701225105-10066 5.15.0-1012-gcp #17~20.04.1-Ubuntu SMP Thu Jun 23 16:10:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-07-01 22:52:02 UTC, end at Fri 2022-07-01 23:00:29 UTC. --
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --storage-driver-buffer-duration duration                  Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction (default 1m0s) (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --storage-driver-db string                                 database name (default "cadvisor") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --storage-driver-host string                               database host:port (default "localhost:8086") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --storage-driver-password string                           database password (default "root") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --storage-driver-secure                                    use secure connection with database (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --storage-driver-table string                              table name (default "stats") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --storage-driver-user string                               database username (default "root") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --streaming-connection-idle-timeout duration               Maximum time a streaming connection can be idle before the connection is automatically closed. 0 indicates no timeout. Example: '5m'. Note: All connections to the kubelet server have a maximum duration of 4 hours. (default 4h0m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --sync-frequency duration                                  Max period between synchronizing running containers and config (default 1m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --system-cgroups string                                    Optional absolute name of cgroups in which to place all non-kernel processes that are not already inside a cgroup under '/'. Empty for no container. Rolling back the flag requires a reboot. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --system-reserved mapStringString                          A set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=500Mi,ephemeral-storage=1Gi) pairs that describe resources reserved for non-kubernetes components. Currently only cpu and memory are supported. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ for more detail. [default=none] (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --system-reserved-cgroup string                            Absolute name of the top level cgroup that is used to manage non-kubernetes components for which compute resources were reserved via '--system-reserved' flag. Ex. '/system-reserved'. [default=''] (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --tls-cert-file string                                     File containing x509 Certificate used for serving HTTPS (with intermediate certs, if any, concatenated after server cert). If --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory passed to --cert-dir. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --tls-cipher-suites strings                                Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:                 Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:                 Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --tls-min-version string                                   Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13 (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --tls-private-key-file string                              File containing x509 private key matching --tls-cert-file. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --topology-manager-policy string                           Topology Manager policy to use. Possible values: 'none', 'best-effort', 'restricted', 'single-numa-node'. (default "none") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --topology-manager-scope string                            Scope to which topology hints applied. Topology Manager collects hints from Hint Providers and applies them to defined scope to ensure the pod admission. Possible values: 'container', 'pod'. (default "container") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:   -v, --v Level                                                  number for the log level verbosity
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --version version[=true]                                   Print version information and quit
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --vmodule pattern=N,...                                    comma-separated list of pattern=N settings for file-filtered logging (only works for text log format)
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --volume-plugin-dir string                                 The full path of the directory in which to search for additional third party volume plugins (default "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]:       --volume-stats-agg-period duration                         Specifies interval for kubelet to calculate and cache the volume disk usage for all pods and volumes.  To disable volume calculations, set to a negative number. (default 1m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0701 23:00:29.354323  238679 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
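The describe-nodes failure is the same symptom as above: nothing is serving on the port kubectl targets. A quick check from inside the node (port taken from the refusal message):

	# is anything listening on the apiserver port?
	sudo ss -ltnp | grep 8443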
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220701225105-10066 -n kubernetes-upgrade-20220701225105-10066
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220701225105-10066 -n kubernetes-upgrade-20220701225105-10066: exit status 2 (397.214ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-20220701225105-10066" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220701225105-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220701225105-10066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220701225105-10066: (2.196388978s)
--- FAIL: TestKubernetesUpgrade (566.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (284.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220701225718-10066 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2
E0701 22:57:32.034670   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-20220701225718-10066 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2: exit status 80 (4m42.290886495s)

                                                
                                                
-- stdout --
	* [no-preload-20220701225718-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14483
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node no-preload-20220701225718-10066 in cluster no-preload-20220701225718-10066
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 22:57:18.871997  220277 out.go:296] Setting OutFile to fd 1 ...
	I0701 22:57:18.872172  220277 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:57:18.872183  220277 out.go:309] Setting ErrFile to fd 2...
	I0701 22:57:18.872188  220277 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:57:18.872579  220277 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 22:57:18.872841  220277 out.go:303] Setting JSON to false
	I0701 22:57:18.874906  220277 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2392,"bootTime":1656713847,"procs":1316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 22:57:18.874971  220277 start.go:125] virtualization: kvm guest
	I0701 22:57:18.877974  220277 out.go:177] * [no-preload-20220701225718-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0701 22:57:18.879280  220277 notify.go:193] Checking for updates...
	I0701 22:57:18.880630  220277 out.go:177]   - MINIKUBE_LOCATION=14483
	I0701 22:57:18.881907  220277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 22:57:18.883293  220277 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 22:57:18.884735  220277 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 22:57:18.886077  220277 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 22:57:18.887862  220277 config.go:178] Loaded profile config "cilium-20220701225121-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 22:57:18.887990  220277 config.go:178] Loaded profile config "kubernetes-upgrade-20220701225105-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 22:57:18.888087  220277 config.go:178] Loaded profile config "old-k8s-version-20220701225700-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0701 22:57:18.888135  220277 driver.go:360] Setting default libvirt URI to qemu:///system
	I0701 22:57:18.934012  220277 docker.go:137] docker version: linux-20.10.17
	I0701 22:57:18.934098  220277 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 22:57:19.060417  220277 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2022-07-01 22:57:18.97166196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 22:57:19.060560  220277 docker.go:254] overlay module found
	I0701 22:57:19.063095  220277 out.go:177] * Using the docker driver based on user configuration
	I0701 22:57:19.064508  220277 start.go:284] selected driver: docker
	I0701 22:57:19.064524  220277 start.go:808] validating driver "docker" against <nil>
	I0701 22:57:19.064549  220277 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 22:57:19.065716  220277 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 22:57:19.183759  220277 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2022-07-01 22:57:19.095006562 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 22:57:19.183869  220277 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0701 22:57:19.184046  220277 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 22:57:19.186139  220277 out.go:177] * Using Docker driver with root privileges
	I0701 22:57:19.187678  220277 cni.go:95] Creating CNI manager for ""
	I0701 22:57:19.187704  220277 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 22:57:19.187720  220277 start_flags.go:305] Found "CNI" CNI - setting NetworkPlugin=cni
	I0701 22:57:19.187734  220277 start_flags.go:310] config:
	{Name:no-preload-20220701225718-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:no-preload-20220701225718-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 22:57:19.189424  220277 out.go:177] * Starting control plane node no-preload-20220701225718-10066 in cluster no-preload-20220701225718-10066
	I0701 22:57:19.190676  220277 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0701 22:57:19.191875  220277 out.go:177] * Pulling base image ...
	I0701 22:57:19.193114  220277 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 22:57:19.193201  220277 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0701 22:57:19.193240  220277 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/config.json ...
	I0701 22:57:19.193274  220277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/config.json: {Name:mkb308f9c56e7813d06a10776071755cf14222d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 22:57:19.193419  220277 cache.go:107] acquiring lock: {Name:mk3aed9edf4e045130f7a3c6fdc7a324a577ec7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 22:57:19.193429  220277 cache.go:107] acquiring lock: {Name:mk9ab11f02b498228e877e934d5aaa541b21cbf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 22:57:19.193478  220277 cache.go:107] acquiring lock: {Name:mk7ec70fd71856cc28acc69a0da3b72748a4420a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 22:57:19.193506  220277 cache.go:107] acquiring lock: {Name:mk881497b5d07c75cf2f158738d77e27bd2a369d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 22:57:19.193513  220277 cache.go:107] acquiring lock: {Name:mk72f6f6d64839ffc62747fa568c11250cb4422d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 22:57:19.193515  220277 cache.go:107] acquiring lock: {Name:mk3b0e90d77cbe629b1ed14b104838f8ec036785 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 22:57:19.193471  220277 cache.go:107] acquiring lock: {Name:mk8030c0afbd72b38281e129af86f3686df5df89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 22:57:19.193558  220277 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2 exists
	I0701 22:57:19.193576  220277 cache.go:107] acquiring lock: {Name:mk5766c1b843c08c650f7c84836d8506a465b496 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 22:57:19.193580  220277 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.24.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2" took 179.345µs
	I0701 22:57:19.193596  220277 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2 exists
	I0701 22:57:19.193602  220277 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.24.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2 succeeded
	I0701 22:57:19.193597  220277 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0701 22:57:19.193618  220277 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 exists
	I0701 22:57:19.193616  220277 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.24.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2" took 170.568µs
	I0701 22:57:19.193628  220277 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.24.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2 succeeded
	I0701 22:57:19.193624  220277 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 220.157µs
	I0701 22:57:19.193633  220277 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2 exists
	I0701 22:57:19.193638  220277 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0701 22:57:19.193636  220277 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0" took 178.885µs
	I0701 22:57:19.193638  220277 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2 exists
	I0701 22:57:19.193652  220277 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.3-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 succeeded
	I0701 22:57:19.193628  220277 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0701 22:57:19.193652  220277 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.24.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2" took 144.225µs
	I0701 22:57:19.193664  220277 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 exists
	I0701 22:57:19.193668  220277 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.24.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2 succeeded
	I0701 22:57:19.193664  220277 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.24.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2" took 257.621µs
	I0701 22:57:19.193668  220277 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 157.127µs
	I0701 22:57:19.193678  220277 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0701 22:57:19.193676  220277 cache.go:96] cache image "k8s.gcr.io/pause:3.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7" took 103.53µs
	I0701 22:57:19.193675  220277 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.24.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2 succeeded
	I0701 22:57:19.193685  220277 cache.go:80] save to tar file k8s.gcr.io/pause:3.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 succeeded
	I0701 22:57:19.193691  220277 cache.go:87] Successfully saved all images to host disk.
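All of those cache.go lines are the visible effect of --preload=false: instead of unpacking a single preloaded tarball, minikube verifies a per-image cache under .minikube/cache/images and loads each component image separately. The same cache can be primed by hand; a sketch using two of the images named above:

	# pre-populate minikube's image cache one image at a time
	out/minikube-linux-amd64 cache add k8s.gcr.io/kube-apiserver:v1.24.2
	out/minikube-linux-amd64 cache add k8s.gcr.io/etcd:3.5.3-0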
	I0701 22:57:19.229420  220277 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0701 22:57:19.229445  220277 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0701 22:57:19.229463  220277 cache.go:208] Successfully downloaded all kic artifacts
	I0701 22:57:19.229505  220277 start.go:352] acquiring machines lock for no-preload-20220701225718-10066: {Name:mk0df5e406dc07f9b5bbaf453954c11d3f5f2a14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 22:57:19.229638  220277 start.go:356] acquired machines lock for "no-preload-20220701225718-10066" in 110.248µs
	I0701 22:57:19.229665  220277 start.go:91] Provisioning new machine with config: &{Name:no-preload-20220701225718-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:no-preload-20220701225718-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 22:57:19.229772  220277 start.go:131] createHost starting for "" (driver="docker")
	I0701 22:57:19.232153  220277 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0701 22:57:19.232382  220277 start.go:165] libmachine.API.Create for "no-preload-20220701225718-10066" (driver="docker")
	I0701 22:57:19.232414  220277 client.go:168] LocalClient.Create starting
	I0701 22:57:19.232482  220277 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem
	I0701 22:57:19.232515  220277 main.go:134] libmachine: Decoding PEM data...
	I0701 22:57:19.232545  220277 main.go:134] libmachine: Parsing certificate...
	I0701 22:57:19.232608  220277 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem
	I0701 22:57:19.232635  220277 main.go:134] libmachine: Decoding PEM data...
	I0701 22:57:19.232656  220277 main.go:134] libmachine: Parsing certificate...
	I0701 22:57:19.232994  220277 cli_runner.go:164] Run: docker network inspect no-preload-20220701225718-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0701 22:57:19.266397  220277 cli_runner.go:211] docker network inspect no-preload-20220701225718-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0701 22:57:19.266471  220277 network_create.go:272] running [docker network inspect no-preload-20220701225718-10066] to gather additional debugging logs...
	I0701 22:57:19.266498  220277 cli_runner.go:164] Run: docker network inspect no-preload-20220701225718-10066
	W0701 22:57:19.303957  220277 cli_runner.go:211] docker network inspect no-preload-20220701225718-10066 returned with exit code 1
	I0701 22:57:19.303988  220277 network_create.go:275] error running [docker network inspect no-preload-20220701225718-10066]: docker network inspect no-preload-20220701225718-10066: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220701225718-10066
	I0701 22:57:19.304003  220277 network_create.go:277] output of [docker network inspect no-preload-20220701225718-10066]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220701225718-10066
	
	** /stderr **
	I0701 22:57:19.304052  220277 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 22:57:19.345436  220277 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b090b5bc601e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:8d:0d:43:b9}}
	I0701 22:57:19.346316  220277 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-585a063a32f9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:48:75:30:46}}
	I0701 22:57:19.347004  220277 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-92ea408e18d1 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:39:f9:35:05}}
	I0701 22:57:19.350422  220277 network.go:240] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName:br-3bc5e9344b9b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:5d:e2:9d:a2}}
	I0701 22:57:19.351616  220277 network.go:240] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName:br-c27aabdf32a4 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:53:da:e6:fe}}
	I0701 22:57:19.352467  220277 network.go:288] reserving subnet 192.168.94.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.94.0:0xc000010858] misses:0}
	I0701 22:57:19.352504  220277 network.go:235] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0701 22:57:19.352515  220277 network_create.go:115] attempt to create docker network no-preload-20220701225718-10066 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0701 22:57:19.352560  220277 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220701225718-10066 no-preload-20220701225718-10066
	I0701 22:57:19.429460  220277 network_create.go:99] docker network no-preload-20220701225718-10066 192.168.94.0/24 created
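The subnet scan above starts at 192.168.49.0/24 and advances the third octet by 9 until it finds a range not already backed by a bridge interface, settling on 192.168.94.0/24 here. A toy reconstruction with this run's taken subnets hard-coded (minikube's real logic in network.go also inspects host interfaces and reserves the winner):

package main

import "fmt"

func main() {
	// Subnets already claimed by other profiles in this run.
	taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true}
	for octet := 49; octet <= 255; octet += 9 {
		if !taken[octet] {
			fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
			return
		}
	}
	fmt.Println("no free subnet found")
}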
	I0701 22:57:19.429501  220277 kic.go:106] calculated static IP "192.168.94.2" for the "no-preload-20220701225718-10066" container
	I0701 22:57:19.429579  220277 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0701 22:57:19.474706  220277 cli_runner.go:164] Run: docker volume create no-preload-20220701225718-10066 --label name.minikube.sigs.k8s.io=no-preload-20220701225718-10066 --label created_by.minikube.sigs.k8s.io=true
	I0701 22:57:19.512606  220277 oci.go:103] Successfully created a docker volume no-preload-20220701225718-10066
	I0701 22:57:19.512727  220277 cli_runner.go:164] Run: docker run --rm --name no-preload-20220701225718-10066-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20220701225718-10066 --entrypoint /usr/bin/test -v no-preload-20220701225718-10066:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib
	I0701 22:57:20.142181  220277 oci.go:107] Successfully prepared a docker volume no-preload-20220701225718-10066
	I0701 22:57:20.142242  220277 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	W0701 22:57:20.142370  220277 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0701 22:57:20.142480  220277 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0701 22:57:20.267301  220277 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-20220701225718-10066 --name no-preload-20220701225718-10066 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20220701225718-10066 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-20220701225718-10066 --network no-preload-20220701225718-10066 --ip 192.168.94.2 --volume no-preload-20220701225718-10066:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e
	I0701 22:57:20.672880  220277 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Running}}
	I0701 22:57:20.710745  220277 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 22:57:20.747619  220277 cli_runner.go:164] Run: docker exec no-preload-20220701225718-10066 stat /var/lib/dpkg/alternatives/iptables
	I0701 22:57:20.811762  220277 oci.go:144] the created container "no-preload-20220701225718-10066" has a running status.
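Everything the node needs is packed into that single docker run: privileged mode, the static IP on the per-profile network, the /var volume, and the published API/SSH ports. A trimmed, illustrative reconstruction of the argument list (flag values copied from the log; the helper and the reduced flag set are ours):

package main

import (
	"fmt"
	"strings"
)

// kicRunArgs rebuilds a reduced version of the docker run invocation above.
func kicRunArgs(name, ip, image string) []string {
	return []string{
		"run", "-d", "-t", "--privileged",
		"--security-opt", "seccomp=unconfined",
		"--hostname", name, "--name", name,
		"--network", name, "--ip", ip, // per-profile network, static IP
		"--volume", name + ":/var",
		"--memory=2200mb", "--cpus=2",
		"--publish=127.0.0.1::8443", "--publish=127.0.0.1::22",
		image,
	}
}

func main() {
	fmt.Println("docker " + strings.Join(kicRunArgs(
		"no-preload-20220701225718-10066",
		"192.168.94.2",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420",
	), " "))
}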
	I0701 22:57:20.811798  220277 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa...
	I0701 22:57:20.872431  220277 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0701 22:57:20.971501  220277 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 22:57:21.021537  220277 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0701 22:57:21.021563  220277 kic_runner.go:114] Args: [docker exec --privileged no-preload-20220701225718-10066 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0701 22:57:21.118339  220277 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 22:57:21.157582  220277 machine.go:88] provisioning docker machine ...
	I0701 22:57:21.157627  220277 ubuntu.go:169] provisioning hostname "no-preload-20220701225718-10066"
	I0701 22:57:21.157674  220277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 22:57:21.194728  220277 main.go:134] libmachine: Using SSH client type: native
	I0701 22:57:21.194909  220277 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49402 <nil> <nil>}
	I0701 22:57:21.194929  220277 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220701225718-10066 && echo "no-preload-20220701225718-10066" | sudo tee /etc/hostname
	I0701 22:57:21.331166  220277 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220701225718-10066
	
	I0701 22:57:21.331247  220277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 22:57:21.370534  220277 main.go:134] libmachine: Using SSH client type: native
	I0701 22:57:21.370698  220277 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49402 <nil> <nil>}
	I0701 22:57:21.370721  220277 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220701225718-10066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220701225718-10066/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220701225718-10066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 22:57:21.494151  220277 main.go:134] libmachine: SSH cmd err, output: <nil>: 
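The hostname provisioning above is plain string templating over SSH: substitute the node name into a fixed shell fragment, run it on the guest, and read back the output. A minimal sketch that renders the /etc/hosts fixup shown above (the helper name is ours):

package main

import "fmt"

// hostsFixup renders the shell snippet run over SSH above for a given hostname.
func hostsFixup(name string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(hostsFixup("no-preload-20220701225718-10066"))
}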
	I0701 22:57:21.494174  220277 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
	I0701 22:57:21.494199  220277 ubuntu.go:177] setting up certificates
	I0701 22:57:21.494209  220277 provision.go:83] configureAuth start
	I0701 22:57:21.494266  220277 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220701225718-10066
	I0701 22:57:21.528688  220277 provision.go:138] copyHostCerts
	I0701 22:57:21.528760  220277 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
	I0701 22:57:21.528778  220277 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
	I0701 22:57:21.528837  220277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
	I0701 22:57:21.528930  220277 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
	I0701 22:57:21.528942  220277 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
	I0701 22:57:21.528973  220277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
	I0701 22:57:21.529042  220277 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
	I0701 22:57:21.529052  220277 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
	I0701 22:57:21.529091  220277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1679 bytes)
	I0701 22:57:21.529150  220277 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220701225718-10066 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220701225718-10066]
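The server certificate generated here must carry every address a client might dial, hence the SAN list of node IP, loopback, and hostnames. A stripped-down sketch of the same step with Go's crypto/x509, using a freshly created self-signed CA in place of .minikube/certs/ca.pem (error handling elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Stand-in for the minikubeCA key pair loaded from disk above.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SAN list from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-20220701225718-10066"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("192.168.94.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-20220701225718-10066"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
	fmt.Printf("server cert: %d DER bytes\n", len(der))
}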
	I0701 22:57:21.754780  220277 provision.go:172] copyRemoteCerts
	I0701 22:57:21.754847  220277 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 22:57:21.754897  220277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 22:57:21.786874  220277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49402 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 22:57:21.869373  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 22:57:21.886425  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0701 22:57:21.902474  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 22:57:21.918509  220277 provision.go:86] duration metric: configureAuth took 424.292569ms
	I0701 22:57:21.918527  220277 ubuntu.go:193] setting minikube options for container-runtime
	I0701 22:57:21.918685  220277 config.go:178] Loaded profile config "no-preload-20220701225718-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 22:57:21.918699  220277 machine.go:91] provisioned docker machine in 761.098619ms
	I0701 22:57:21.918704  220277 client.go:171] LocalClient.Create took 2.686281476s
	I0701 22:57:21.918724  220277 start.go:173] duration metric: libmachine.API.Create for "no-preload-20220701225718-10066" took 2.68633864s
	I0701 22:57:21.918734  220277 start.go:306] post-start starting for "no-preload-20220701225718-10066" (driver="docker")
	I0701 22:57:21.918739  220277 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 22:57:21.918792  220277 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 22:57:21.918840  220277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 22:57:21.951874  220277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49402 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 22:57:22.039366  220277 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 22:57:22.041945  220277 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 22:57:22.041971  220277 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 22:57:22.041986  220277 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 22:57:22.041994  220277 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0701 22:57:22.042010  220277 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
	I0701 22:57:22.042069  220277 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
	I0701 22:57:22.042153  220277 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem -> 100662.pem in /etc/ssl/certs
	I0701 22:57:22.042249  220277 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 22:57:22.048848  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /etc/ssl/certs/100662.pem (1708 bytes)
	I0701 22:57:22.065638  220277 start.go:309] post-start completed in 146.895808ms
	I0701 22:57:22.065956  220277 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220701225718-10066
	I0701 22:57:22.098121  220277 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/config.json ...
	I0701 22:57:22.098342  220277 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 22:57:22.098382  220277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 22:57:22.130450  220277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49402 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 22:57:22.210682  220277 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 22:57:22.214380  220277 start.go:134] duration metric: createHost completed in 2.98459764s
	I0701 22:57:22.214401  220277 start.go:81] releasing machines lock for "no-preload-20220701225718-10066", held for 2.984751223s
	I0701 22:57:22.214477  220277 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220701225718-10066
	I0701 22:57:22.247528  220277 ssh_runner.go:195] Run: systemctl --version
	I0701 22:57:22.247592  220277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 22:57:22.247601  220277 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0701 22:57:22.247673  220277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 22:57:22.280855  220277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49402 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 22:57:22.281462  220277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49402 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 22:57:22.388928  220277 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 22:57:22.400652  220277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 22:57:22.410075  220277 docker.go:179] disabling docker service ...
	I0701 22:57:22.410114  220277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 22:57:22.428171  220277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 22:57:22.438162  220277 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 22:57:22.538761  220277 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 22:57:22.639316  220277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0701 22:57:22.650894  220277 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 22:57:22.665974  220277 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0701 22:57:22.673614  220277 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0701 22:57:22.681039  220277 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0701 22:57:22.688400  220277 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0701 22:57:22.696194  220277 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0701 22:57:22.703541  220277 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
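The base64 payload written to 02-containerd.conf above is tiny; decoding it shows the drop-in does nothing beyond pinning the containerd config schema version:

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// The string piped into /etc/containerd/containerd.conf.d/02-containerd.conf.
	b, err := base64.StdEncoding.DecodeString("dmVyc2lvbiA9IDIK")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", b) // "version = 2\n"
}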
	I0701 22:57:22.715511  220277 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 22:57:22.722012  220277 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 22:57:22.729166  220277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 22:57:22.818629  220277 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 22:57:22.897028  220277 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0701 22:57:22.897094  220277 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0701 22:57:22.900665  220277 start.go:471] Will wait 60s for crictl version
	I0701 22:57:22.900726  220277 ssh_runner.go:195] Run: sudo crictl version
	I0701 22:57:22.924971  220277 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0701 22:57:22.925022  220277 ssh_runner.go:195] Run: containerd --version
	I0701 22:57:22.952573  220277 ssh_runner.go:195] Run: containerd --version
	I0701 22:57:22.982138  220277 out.go:177] * Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	I0701 22:57:22.983463  220277 cli_runner.go:164] Run: docker network inspect no-preload-20220701225718-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 22:57:23.015049  220277 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0701 22:57:23.018479  220277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 22:57:23.028619  220277 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 22:57:23.028655  220277 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 22:57:23.051010  220277 containerd.go:543] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.2". assuming images are not preloaded.
	I0701 22:57:23.051030  220277 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.2 k8s.gcr.io/kube-controller-manager:v1.24.2 k8s.gcr.io/kube-scheduler:v1.24.2 k8s.gcr.io/kube-proxy:v1.24.2 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0701 22:57:23.051108  220277 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 22:57:23.051125  220277 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
	I0701 22:57:23.051138  220277 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.2
	I0701 22:57:23.051154  220277 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.2
	I0701 22:57:23.051175  220277 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0701 22:57:23.051228  220277 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.2
	I0701 22:57:23.051129  220277 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
	I0701 22:57:23.051112  220277 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.2
	I0701 22:57:23.052286  220277 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.2: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.2
	I0701 22:57:23.052310  220277 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
	I0701 22:57:23.052320  220277 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.2: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.2
	I0701 22:57:23.052286  220277 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0701 22:57:23.052281  220277 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.2: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.2
	I0701 22:57:23.052291  220277 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.2: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.2
	I0701 22:57:23.052292  220277 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
	I0701 22:57:23.052293  220277 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 22:57:23.238605  220277 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.2"
	I0701 22:57:23.238758  220277 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.2"
	I0701 22:57:23.239384  220277 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
	I0701 22:57:23.239916  220277 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
	I0701 22:57:23.240690  220277 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.2"
	I0701 22:57:23.241256  220277 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
	I0701 22:57:23.270030  220277 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.2" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.2" does not exist at hash "a634548d10b032c2a1d704ef9a2ab04c12b0574afe67ee192b196a7f12da9536" in container runtime
	I0701 22:57:23.270122  220277 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.2
	I0701 22:57:23.270185  220277 ssh_runner.go:195] Run: which crictl
	I0701 22:57:23.272439  220277 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0701 22:57:23.272485  220277 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
	I0701 22:57:23.272484  220277 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.2" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.2" does not exist at hash "5d725196c1f47e72d2bc7069776d5928b1fb1e4adf09c18997733099aa3663ac" in container runtime
	I0701 22:57:23.272517  220277 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.2
	I0701 22:57:23.272527  220277 ssh_runner.go:195] Run: which crictl
	I0701 22:57:23.272536  220277 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0701 22:57:23.272558  220277 ssh_runner.go:195] Run: which crictl
	I0701 22:57:23.272566  220277 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
	I0701 22:57:23.272564  220277 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.2" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.2" does not exist at hash "34cdf99b1bb3b3a62c5b4226c3bc0983ab1f13e776269d1872092091b07203df" in container runtime
	I0701 22:57:23.272590  220277 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0701 22:57:23.272595  220277 ssh_runner.go:195] Run: which crictl
	I0701 22:57:23.272603  220277 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.2
	I0701 22:57:23.272615  220277 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0701 22:57:23.272626  220277 ssh_runner.go:195] Run: which crictl
	I0701 22:57:23.272635  220277 ssh_runner.go:195] Run: which crictl
	I0701 22:57:23.275104  220277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.2
	I0701 22:57:23.286075  220277 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.2"
	I0701 22:57:23.319879  220277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.2
	I0701 22:57:23.319907  220277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
	I0701 22:57:23.320029  220277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
	I0701 22:57:23.322658  220277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
	I0701 22:57:23.322680  220277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.2
	I0701 22:57:23.372823  220277 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0701 22:57:23.420264  220277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2
	I0701 22:57:23.420429  220277 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.2
	I0701 22:57:23.420560  220277 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.2" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.2" does not exist at hash "d3377ffb7177cc4becce8a534d8547aca9530cb30fac9ebe479b31102f1ba503" in container runtime
	I0701 22:57:23.420611  220277 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.2
	I0701 22:57:23.420647  220277 ssh_runner.go:195] Run: which crictl
	I0701 22:57:23.420762  220277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2
	I0701 22:57:23.420899  220277 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.2
	I0701 22:57:23.425496  220277 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.24.2: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.24.2': No such file or directory
	I0701 22:57:23.425562  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2 --> /var/lib/minikube/images/kube-proxy_v1.24.2 (39518208 bytes)
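Each cached image follows the same transfer pattern seen here: stat the target over SSH, and only scp the tar when stat exits non-zero. A sketch of that check-then-copy step using the system ssh/scp clients (the host string and paths are illustrative; minikube drives its own SSH client internally):

package main

import (
	"fmt"
	"os/exec"
)

// ensureRemote copies local to host:remote only if remote does not exist yet.
func ensureRemote(host, local, remote string) error {
	// Mirrors the logged existence check: stat -c "%s %y" <path>.
	check := fmt.Sprintf("stat -c '%%s %%y' %s", remote)
	if err := exec.Command("ssh", host, check).Run(); err == nil {
		return nil // already present, skip the transfer
	}
	fmt.Printf("copying %s -> %s:%s\n", local, host, remote)
	return exec.Command("scp", local, host+":"+remote).Run()
}

func main() {
	_ = ensureRemote("docker@127.0.0.1",
		"kube-proxy_v1.24.2",
		"/var/lib/minikube/images/kube-proxy_v1.24.2")
}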
	I0701 22:57:23.431423  220277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
	I0701 22:57:23.431515  220277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2
	I0701 22:57:23.431593  220277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
	I0701 22:57:23.431552  220277 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0701 22:57:23.431639  220277 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.2
	I0701 22:57:23.431737  220277 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0701 22:57:23.450395  220277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
	I0701 22:57:23.450492  220277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.2
	I0701 22:57:23.450535  220277 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0701 22:57:23.450396  220277 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0701 22:57:23.450638  220277 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 22:57:23.450659  220277 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.24.2: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.24.2': No such file or directory
	I0701 22:57:23.450672  220277 ssh_runner.go:195] Run: which crictl
	I0701 22:57:23.450680  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2 --> /var/lib/minikube/images/kube-scheduler_v1.24.2 (15491584 bytes)
	I0701 22:57:23.474771  220277 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0701 22:57:23.474805  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (102146048 bytes)
	I0701 22:57:23.474877  220277 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.24.2: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.24.2': No such file or directory
	I0701 22:57:23.474902  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2 --> /var/lib/minikube/images/kube-controller-manager_v1.24.2 (31037952 bytes)
	I0701 22:57:23.474951  220277 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0701 22:57:23.474966  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (311296 bytes)
	I0701 22:57:23.520226  220277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 22:57:23.535258  220277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2
	I0701 22:57:23.535332  220277 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0701 22:57:23.535352  220277 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.2
	I0701 22:57:23.535369  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
	I0701 22:57:23.629059  220277 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.24.2: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.24.2': No such file or directory
	I0701 22:57:23.629097  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2 --> /var/lib/minikube/images/kube-apiserver_v1.24.2 (33798144 bytes)
	I0701 22:57:23.629059  220277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0701 22:57:23.629196  220277 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0701 22:57:23.631117  220277 containerd.go:227] Loading image: /var/lib/minikube/images/pause_3.7
	I0701 22:57:23.631187  220277 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
	I0701 22:57:23.662321  220277 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0701 22:57:23.662361  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0701 22:57:23.875004  220277 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
	I0701 22:57:23.875057  220277 containerd.go:227] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.2
	I0701 22:57:23.875109  220277 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.24.2
	I0701 22:57:24.998247  220277 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.24.2: (1.123108114s)
	I0701 22:57:24.998280  220277 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2 from cache
	I0701 22:57:24.998313  220277 containerd.go:227] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0701 22:57:24.998358  220277 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
	I0701 22:57:25.681893  220277 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
	I0701 22:57:25.681984  220277 containerd.go:227] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0701 22:57:25.682047  220277 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0701 22:57:26.139202  220277 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0701 22:57:26.139252  220277 containerd.go:227] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.2
	I0701 22:57:26.139294  220277 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.24.2
	I0701 22:57:27.800448  220277 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.24.2: (1.661126618s)
	I0701 22:57:27.800486  220277 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2 from cache
	I0701 22:57:27.800520  220277 containerd.go:227] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.2
	I0701 22:57:27.800574  220277 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.24.2
	I0701 22:57:29.151166  220277 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.24.2: (1.35056917s)
	I0701 22:57:29.151189  220277 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2 from cache
	I0701 22:57:29.151215  220277 containerd.go:227] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.2
	I0701 22:57:29.151248  220277 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.24.2
	I0701 22:57:30.599229  220277 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.24.2: (1.447959866s)
	I0701 22:57:30.599258  220277 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2 from cache
	I0701 22:57:30.599289  220277 containerd.go:227] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0701 22:57:30.599339  220277 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
	I0701 22:57:34.451926  220277 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (3.852559076s)
	I0701 22:57:34.452014  220277 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
	I0701 22:57:34.452055  220277 cache_images.go:123] Successfully loaded all cached images
	I0701 22:57:34.452086  220277 cache_images.go:92] LoadImages completed in 11.401028029s
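Each "Loading image" step above is one ctr invocation in containerd's k8s.io namespace, run serially; the etcd tar dominates at roughly 3.9s. A sketch of that import loop over the transferred tars (paths copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tars := []string{
		"/var/lib/minikube/images/pause_3.7",
		"/var/lib/minikube/images/kube-scheduler_v1.24.2",
		"/var/lib/minikube/images/etcd_3.5.3-0",
	}
	for _, t := range tars {
		// Equivalent to the logged: sudo ctr -n=k8s.io images import <tar>
		out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", t).CombinedOutput()
		if err != nil {
			fmt.Printf("import %s failed: %v\n%s", t, err, out)
		}
	}
}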
	I0701 22:57:34.452153  220277 ssh_runner.go:195] Run: sudo crictl info
	I0701 22:57:34.482096  220277 cni.go:95] Creating CNI manager for ""
	I0701 22:57:34.482122  220277 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 22:57:34.482142  220277 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0701 22:57:34.482178  220277 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220701225718-10066 NodeName:no-preload-20220701225718-10066 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0701 22:57:34.482331  220277 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20220701225718-10066"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
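	The kubeadm config above is rendered from the options struct logged at kubeadm.go:158. A toy fragment showing how such a config can be produced with text/template (the field names here are ours, not minikube's actual template variables):

package main

import (
	"os"
	"text/template"
)

var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`))

func main() {
	_ = kubeadmTmpl.Execute(os.Stdout, struct {
		AdvertiseAddress, CRISocket, NodeName string
		APIServerPort                         int
	}{"192.168.94.2", "/run/containerd/containerd.sock", "no-preload-20220701225718-10066", 8443})
}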
	
	I0701 22:57:34.482431  220277 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20220701225718-10066 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:no-preload-20220701225718-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0701 22:57:34.482489  220277 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0701 22:57:34.490814  220277 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.24.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.24.2': No such file or directory
	
	Initiating transfer...
	I0701 22:57:34.490872  220277 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.24.2
	I0701 22:57:34.497862  220277 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.24.2/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.24.2/bin/linux/amd64/kubectl.sha256
	I0701 22:57:34.497945  220277 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.2/kubectl
	I0701 22:57:34.497865  220277 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.24.2/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.24.2/bin/linux/amd64/kubeadm.sha256
	I0701 22:57:34.497942  220277 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.24.2/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.24.2/bin/linux/amd64/kubelet.sha256
	I0701 22:57:34.498027  220277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 22:57:34.498047  220277 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.2/kubeadm
	I0701 22:57:34.501137  220277 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.24.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.24.2/kubectl': No such file or directory
	I0701 22:57:34.501165  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/linux/amd64/v1.24.2/kubectl --> /var/lib/minikube/binaries/v1.24.2/kubectl (45711360 bytes)
	I0701 22:57:34.502771  220277 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.24.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.24.2/kubeadm': No such file or directory
	I0701 22:57:34.502801  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/linux/amd64/v1.24.2/kubeadm --> /var/lib/minikube/binaries/v1.24.2/kubeadm (44376064 bytes)
	I0701 22:57:34.509548  220277 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.2/kubelet
	I0701 22:57:34.540765  220277 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.24.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.24.2/kubelet': No such file or directory
	I0701 22:57:34.540806  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/linux/amd64/v1.24.2/kubelet --> /var/lib/minikube/binaries/v1.24.2/kubelet (116353400 bytes)
	I0701 22:57:34.927075  220277 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 22:57:34.934375  220277 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (524 bytes)
	I0701 22:57:34.948359  220277 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 22:57:34.963207  220277 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
	I0701 22:57:34.976575  220277 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0701 22:57:34.979624  220277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 22:57:34.988613  220277 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066 for IP: 192.168.94.2
	I0701 22:57:34.988720  220277 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
	I0701 22:57:34.988779  220277 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
	I0701 22:57:34.988845  220277 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/client.key
	I0701 22:57:34.988862  220277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/client.crt with IP's: []
	I0701 22:57:35.213037  220277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/client.crt ...
	I0701 22:57:35.213065  220277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/client.crt: {Name:mk43b7331261bd4a7e4e06b0d10ec67b3f140a6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 22:57:35.213268  220277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/client.key ...
	I0701 22:57:35.213294  220277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/client.key: {Name:mkb972901cabbcfd85ecd1df25d8e07582e54bc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 22:57:35.213428  220277 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.key.ad8e880a
	I0701 22:57:35.213450  220277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.crt.ad8e880a with IP's: [192.168.94.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0701 22:57:35.450995  220277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.crt.ad8e880a ...
	I0701 22:57:35.451034  220277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.crt.ad8e880a: {Name:mk7ec1b8bd92804d6ce5669bf998f6cb07831eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 22:57:35.451296  220277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.key.ad8e880a ...
	I0701 22:57:35.451319  220277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.key.ad8e880a: {Name:mk37a8c1f3d92d6e645c5138a54a1665c886f4b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 22:57:35.451447  220277 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.crt.ad8e880a -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.crt
	I0701 22:57:35.451515  220277 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.key.ad8e880a -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.key
	I0701 22:57:35.451567  220277 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/proxy-client.key
	I0701 22:57:35.451585  220277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/proxy-client.crt with IP's: []
	I0701 22:57:35.548245  220277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/proxy-client.crt ...
	I0701 22:57:35.548278  220277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/proxy-client.crt: {Name:mk21a5f47980fb6270b1c493d22aa904296205ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 22:57:35.548484  220277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/proxy-client.key ...
	I0701 22:57:35.548500  220277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/proxy-client.key: {Name:mk5dcf99e8c45980205f6d6fc91a4c9e23ea0495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 22:57:35.548679  220277 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem (1338 bytes)
	W0701 22:57:35.548718  220277 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066_empty.pem, impossibly tiny 0 bytes
	I0701 22:57:35.548733  220277 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 22:57:35.548762  220277 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
	I0701 22:57:35.548789  220277 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
	I0701 22:57:35.548815  220277 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1679 bytes)
	I0701 22:57:35.548860  220277 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem (1708 bytes)
	I0701 22:57:35.549450  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0701 22:57:35.567747  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 22:57:35.584082  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 22:57:35.602317  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 22:57:35.619463  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 22:57:35.636836  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0701 22:57:35.653781  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 22:57:35.674882  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 22:57:35.697611  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem --> /usr/share/ca-certificates/10066.pem (1338 bytes)
	I0701 22:57:35.714553  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /usr/share/ca-certificates/100662.pem (1708 bytes)
	I0701 22:57:35.731121  220277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 22:57:35.748953  220277 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 22:57:35.761540  220277 ssh_runner.go:195] Run: openssl version
	I0701 22:57:35.766457  220277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10066.pem && ln -fs /usr/share/ca-certificates/10066.pem /etc/ssl/certs/10066.pem"
	I0701 22:57:35.773377  220277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10066.pem
	I0701 22:57:35.776462  220277 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul  1 22:28 /usr/share/ca-certificates/10066.pem
	I0701 22:57:35.776503  220277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10066.pem
	I0701 22:57:35.781099  220277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10066.pem /etc/ssl/certs/51391683.0"
	I0701 22:57:35.788748  220277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100662.pem && ln -fs /usr/share/ca-certificates/100662.pem /etc/ssl/certs/100662.pem"
	I0701 22:57:35.796070  220277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100662.pem
	I0701 22:57:35.799156  220277 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul  1 22:28 /usr/share/ca-certificates/100662.pem
	I0701 22:57:35.799202  220277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100662.pem
	I0701 22:57:35.803952  220277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100662.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 22:57:35.811120  220277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 22:57:35.817887  220277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 22:57:35.820815  220277 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul  1 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0701 22:57:35.820857  220277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 22:57:35.825305  220277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
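(The symlink names in the three ln -fs steps above are not arbitrary: entries under /etc/ssl/certs are keyed by the OpenSSL subject hash of each certificate, which is exactly what the openssl x509 -hash -noout invocations compute. The mapping can be checked by hand with the same command, for example for the minikube CA from this run:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above
)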
	I0701 22:57:35.832402  220277 kubeadm.go:395] StartCluster: {Name:no-preload-20220701225718-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:no-preload-20220701225718-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 22:57:35.832508  220277 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0701 22:57:35.832541  220277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 22:57:35.860645  220277 cri.go:87] found id: ""
	I0701 22:57:35.860699  220277 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 22:57:35.868106  220277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 22:57:35.875137  220277 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0701 22:57:35.875184  220277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 22:57:35.881744  220277 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 22:57:35.881780  220277 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0701 22:57:36.177680  220277 out.go:204]   - Generating certificates and keys ...
	I0701 22:57:38.964889  220277 out.go:204]   - Booting up control plane ...
	I0701 22:57:47.015170  220277 out.go:204]   - Configuring RBAC rules ...
	I0701 22:57:47.431031  220277 cni.go:95] Creating CNI manager for ""
	I0701 22:57:47.431061  220277 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 22:57:47.433000  220277 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 22:57:47.434478  220277 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 22:57:47.440030  220277 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 22:57:47.440053  220277 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 22:57:47.458444  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0701 22:57:50.064440  220277 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.605946229s)
	I0701 22:57:50.064504  220277 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 22:57:50.064584  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04 minikube.k8s.io/name=no-preload-20220701225718-10066 minikube.k8s.io/updated_at=2022_07_01T22_57_50_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:50.064584  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:50.122615  220277 ops.go:34] apiserver oom_adj: -16
	I0701 22:57:50.316763  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:50.879986  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:51.380042  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:51.879937  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:52.379923  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:52.879925  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:53.380581  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:53.880309  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:54.380144  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:54.880200  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:55.380016  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:55.880510  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:56.380012  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:56.880727  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:57.380374  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:57.880148  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:58.380616  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:58.880591  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:59.380658  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:57:59.880617  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:58:00.380519  220277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 22:58:00.447265  220277 kubeadm.go:1045] duration metric: took 10.382738241s to wait for elevateKubeSystemPrivileges.
	I0701 22:58:00.447294  220277 kubeadm.go:397] StartCluster complete in 24.614899462s
	I0701 22:58:00.447315  220277 settings.go:142] acquiring lock: {Name:mk319951f11766fbe002e53432d5b04e4322851e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 22:58:00.447398  220277 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 22:58:00.449774  220277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 22:58:00.968534  220277 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220701225718-10066" rescaled to 1
	I0701 22:58:00.968586  220277 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 22:58:00.970719  220277 out.go:177] * Verifying Kubernetes components...
	I0701 22:58:00.968656  220277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 22:58:00.968677  220277 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0701 22:58:00.968883  220277 config.go:178] Loaded profile config "no-preload-20220701225718-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 22:58:00.972153  220277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 22:58:00.972238  220277 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220701225718-10066"
	I0701 22:58:00.972277  220277 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220701225718-10066"
	W0701 22:58:00.972292  220277 addons.go:162] addon storage-provisioner should already be in state true
	I0701 22:58:00.972335  220277 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 22:58:00.972250  220277 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220701225718-10066"
	I0701 22:58:00.972361  220277 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220701225718-10066"
	I0701 22:58:00.972717  220277 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 22:58:00.972864  220277 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 22:58:01.034958  220277 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 22:58:01.036795  220277 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 22:58:01.036818  220277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 22:58:01.036871  220277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 22:58:01.043775  220277 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220701225718-10066"
	W0701 22:58:01.043799  220277 addons.go:162] addon default-storageclass should already be in state true
	I0701 22:58:01.043826  220277 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 22:58:01.044311  220277 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 22:58:01.071085  220277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
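(The sed pipeline above fetches the coredns ConfigMap, splices a hosts stanza in front of the forward directive, and replaces the ConfigMap in one shot, so that pods can resolve host.minikube.internal to the host gateway. The injected Corefile fragment is equivalent to:

	hosts {
	   192.168.94.1 host.minikube.internal
	   fallthrough
	}
)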
	I0701 22:58:01.072542  220277 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220701225718-10066" to be "Ready" ...
	I0701 22:58:01.086452  220277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49402 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 22:58:01.091376  220277 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 22:58:01.091405  220277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 22:58:01.091461  220277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 22:58:01.152484  220277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49402 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 22:58:01.249950  220277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 22:58:01.343002  220277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 22:58:01.553632  220277 start.go:809] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
	I0701 22:58:01.779084  220277 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0701 22:58:01.780445  220277 addons.go:414] enableAddons completed in 811.76609ms
	I0701 22:58:03.079232  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:05.079855  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:07.582635  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:10.079380  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:12.579207  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:14.579468  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:16.579943  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:19.079901  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:21.579366  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:24.079528  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:26.579023  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:28.627126  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:31.079720  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:33.579274  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:35.580436  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:38.079315  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:40.080048  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:42.580034  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:45.079141  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:47.079801  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:49.081099  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:51.579103  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:53.579859  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:55.580020  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:58:57.580500  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:00.079302  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:02.080089  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:04.080307  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:06.080872  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:08.580013  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:10.580131  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:13.079469  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:15.580026  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:17.580614  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:20.079589  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:22.579970  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:25.079942  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:27.579319  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:29.580420  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:32.079259  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:34.079773  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:36.579474  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:38.579525  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:41.079571  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:43.579876  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:46.079944  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:48.579651  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:50.580150  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:53.080597  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:55.580108  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 22:59:58.079689  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:00.079994  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:02.580583  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:05.079847  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:07.080009  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:09.580529  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:12.079913  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:14.579397  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:16.579551  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:19.080121  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:21.080325  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:23.579390  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:25.580079  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:28.079925  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:30.580543  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:33.079979  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:35.579671  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:37.580005  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:40.079579  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:42.579391  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:45.080054  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:47.579388  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:49.579551  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:51.580060  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:54.079920  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:56.080084  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:58.579639  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:00.580194  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:03.080050  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:05.580076  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:08.079894  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:10.579509  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:12.579787  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:15.079684  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:17.080035  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:19.579945  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:22.080141  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:24.579221  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:26.579492  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:29.079699  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:31.079895  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:33.080116  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:35.579892  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:37.580013  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:39.580127  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:42.079757  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:44.578989  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:46.579646  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:49.079763  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:51.079825  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:53.080040  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:55.579848  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:58.080095  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:02:00.579813  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:02:01.082156  220277 node_ready.go:38] duration metric: took 4m0.009577086s waiting for node "no-preload-20220701225718-10066" to be "Ready" ...
	I0701 23:02:01.083994  220277 out.go:177] 
	W0701 23:02:01.085364  220277 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0701 23:02:01.085381  220277 out.go:239] * 
	W0701 23:02:01.086064  220277 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 23:02:01.088143  220277 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p no-preload-20220701225718-10066 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2": exit status 80
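Since the start failed purely on the Ready wait (the kindnet CNI manifest was applied at 22:57:47-50, but the node then polled NotReady from 22:58:01 until the wait timed out at 23:02:01), a natural triage step alongside the post-mortem below is to inspect the node conditions and the kube-system pods on the still-running profile: a minimal sketch, assuming the kubeconfig context written by this run (minikube names it after the profile):

	kubectl --context no-preload-20220701225718-10066 get nodes -o wide
	kubectl --context no-preload-20220701225718-10066 -n kube-system get pods -o wide
	kubectl --context no-preload-20220701225718-10066 describe node no-preload-20220701225718-10066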
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220701225718-10066
helpers_test.go:235: (dbg) docker inspect no-preload-20220701225718-10066:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff",
	        "Created": "2022-07-01T22:57:20.298940328Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 220865,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-01T22:57:20.663867782Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff/hostname",
	        "HostsPath": "/var/lib/docker/containers/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff/hosts",
	        "LogPath": "/var/lib/docker/containers/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff-json.log",
	        "Name": "/no-preload-20220701225718-10066",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220701225718-10066:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220701225718-10066",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f8e3e2d6947308dcda48dbc22fb554a071cfe01234e052dce72729a0c26066f6-init/diff:/var/lib/docker/overlay2/edb6d17bdc5cabdb1fac75f829fa6b696b451285a56daf994d10472ac162d1fa/diff:/var/lib/docker/overlay2/12553db54f9401cb3cd4ea304c82950d708bec4ffbeafa05f361df03389b23f5/diff:/var/lib/docker/overlay2/7e3fcd2813dd5acb24030b1419b7a8a53ea2329287a62c3cd166eb5e47617a4e/diff:/var/lib/docker/overlay2/15f31da888d327ea20972c47badf360a35e5d2b17a3fae8f69cb0d0d1db2bfa0/diff:/var/lib/docker/overlay2/4d7fcdaa5483b16cc999e79056aae74f5814a7aca964696154ac6ec31eebe508/diff:/var/lib/docker/overlay2/9f3a7dbc3c70fba7c77f5d8e54e394ac0c37123defe036d76af7d048d08b974e/diff:/var/lib/docker/overlay2/4f0b0b0343fd3ac1a6d5f774e91228b75f1e33f810714d09d38ecd1e38cfe250/diff:/var/lib/docker/overlay2/bfd684b2b6e2cfe1f30901dbf02332074c65e296b386b083ea38c10128b42115/diff:/var/lib/docker/overlay2/d05e53b39ab1962ee7dd1a17c7c6df7e7db940db5a19b0d9312ccafaf3c67afc/diff:/var/lib/docker/overlay2/7a24a3
8f96e1aff9985bf714d962a37888eced9c4b8a3018e499080634306871/diff:/var/lib/docker/overlay2/b54ee85e20ecd71a593d453d879a26c40869049e30ba5b18b4c021c4cf667ec6/diff:/var/lib/docker/overlay2/cc9f249b45cce30a5a0432a644435d68e765dc097e3bfdbde41b7d725f258866/diff:/var/lib/docker/overlay2/dac381b24f77e5de76295d05ca2d4ae6a611d31efdf28050dc8acbc16f893037/diff:/var/lib/docker/overlay2/fe9cd2517ce90b2330101b987c76605b9d73f40b8c967e33975ad22bd3e427df/diff:/var/lib/docker/overlay2/34da7eb6e789c53ba8f86e0d904243d3314979a523044238ab8b7e9e95f685d4/diff:/var/lib/docker/overlay2/44c3ffe75f70e4d9d5d17064609a4a167961cdca81aab03b2a72dfbe05d09f41/diff:/var/lib/docker/overlay2/76feeb878d0278a48230287307ce3a01000291c1284d2222d114e37836ebc153/diff:/var/lib/docker/overlay2/3ab5eb284edb3402610bb5d4fbd0d8c3fc6b53fd39d2a66cc3a9bb1e313fe5ee/diff:/var/lib/docker/overlay2/7604bb52dbaf4e2c359f0b2b11aa345bbf33498b0346044f4c6ff3178a3e8df5/diff:/var/lib/docker/overlay2/0a19abc50663f9812511b849524a7625a16f183fa64aff5a0a9c1c7664cc2027/diff:/var/lib/d
ocker/overlay2/00d25c59a62da71857955ae29c7c803b77cb4c97b23913f166c62440a3b63456/diff:/var/lib/docker/overlay2/cd6c06e5c4d7bdfb5a78ffcf6c0cf331a6101b713bd238ad5c0ab119ad599bf4/diff:/var/lib/docker/overlay2/73ae644d477b17687dc53b7176c9e05aa10e0e0cc22b0d2adbd96493280d2696/diff:/var/lib/docker/overlay2/1490b9c59b00b3ee6078671a8023960891ee2bfc21bfe92c12a71c87ea04fff1/diff:/var/lib/docker/overlay2/42f40e76b67a2f74340f543b0beab44f98b10e0a0903d4006a173272c52a31d0/diff:/var/lib/docker/overlay2/648763e18e8325f20ff26dd88a1788a2d9e575a9ca24f44b2430918381add179/diff:/var/lib/docker/overlay2/955b13af9b6a054abdcd0dfdf71ce57d6da1b5718bee40c52fc7739b4ff16496/diff:/var/lib/docker/overlay2/1e55f6bfa306e6667808548601e508c0335b7021c9c6791ee5ed3df2514aff12/diff:/var/lib/docker/overlay2/ed1fd4b4489438557a54639b2ed769eb0371193cd0129b222edea8f79274f7e8/diff:/var/lib/docker/overlay2/c4058f514a3e669de2d6139c7bc94d4187679acabf288e44e25b5d50db5b14ac/diff:/var/lib/docker/overlay2/f55c5f04c4a64f585a9b90e7f23e5d5142014f06b3bf877739879029b76
b574f/diff:/var/lib/docker/overlay2/74b9141dda92027ee02e8b3abc4bb05ed13ce01d58f4c3d644e7235e411a863e/diff:/var/lib/docker/overlay2/53dd59b4b1dfa382f4b296dffd27b84e247851098ea1c719fa3b8ed3e158e73d/diff:/var/lib/docker/overlay2/01e3082bf93042f963fa4d0b92796bd5a1f3663e87f24a76a6fb5f3dabdb3d2f/diff:/var/lib/docker/overlay2/6b96d2f01ba7bf16240d81397be753c74fa1eb78889ce36d1a88224496f5ee64/diff:/var/lib/docker/overlay2/1c326eb1859e3f119094b72e595fe89b3be1895193b0f9124ec3aa7be78fa0ff/diff:/var/lib/docker/overlay2/9c31f48fe49160b4b3d824b787329ea1d50133fd86b0c7f2054bebed2bd136b4/diff:/var/lib/docker/overlay2/6b78f7313fcedc631cc839c84f80e065670f47ccc8e548578527bec6b0f2cce3/diff:/var/lib/docker/overlay2/6bf15a167fccdef650b6e6b0daa521189e2c744a9a41640bc0e963421f139437/diff:/var/lib/docker/overlay2/b01230a9beae37bded75410c1a3849626080918fdc36535a1f4092f9227a7ccf/diff:/var/lib/docker/overlay2/793ce1632cff927c8e2528b23a62c94ed141cb52d5df04225085072c61ca5bb7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f8e3e2d6947308dcda48dbc22fb554a071cfe01234e052dce72729a0c26066f6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f8e3e2d6947308dcda48dbc22fb554a071cfe01234e052dce72729a0c26066f6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f8e3e2d6947308dcda48dbc22fb554a071cfe01234e052dce72729a0c26066f6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220701225718-10066",
	                "Source": "/var/lib/docker/volumes/no-preload-20220701225718-10066/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220701225718-10066",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220701225718-10066",
	                "name.minikube.sigs.k8s.io": "no-preload-20220701225718-10066",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "865eb475db627936c6b81b6e3b702ce9e018b17349e5ddb5dde9edb749dbced7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49402"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49401"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49398"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49400"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49399"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/865eb475db62",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220701225718-10066": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6714999bf303",
	                        "no-preload-20220701225718-10066"
	                    ],
	                    "NetworkID": "1edec7b6219d6237636ff26267a26187f0ef2e748e4635b07760f0d37cc8596c",
	                    "EndpointID": "0377a99704388e0f2c261b850c52bf87fff4b394cc37a39d49723586e5d2f940",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
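
The inspect dump above is how the post-mortem helpers reach the node container: single fields are pulled out of it with Go templates rather than by parsing the full JSON. A minimal sketch of the pattern (the template matches the one cli_runner uses later in this log; the container name is taken from this run, and a local Docker daemon is assumed):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template the cli_runner uses below to find the published SSH port.
		const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		name := "no-preload-20220701225718-10066" // container from the dump above
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("22/tcp published on host port", strings.TrimSpace(string(out))) // 49402 per the dump
	}
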
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220701225718-10066 -n no-preload-20220701225718-10066
E0701 23:02:01.191024   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory
E0701 23:02:01.511738   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20220701225718-10066 logs -n 25
E0701 23:02:02.152419   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory
helpers_test.go:252: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:55 UTC | 01 Jul 22 22:55 UTC |
	|         | kindnet-20220701225120-10066                      |          |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| ssh     | -p auto-20220701225119-10066                      | minikube | jenkins | v1.26.0 | 01 Jul 22 22:55 UTC | 01 Jul 22 22:55 UTC |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:55 UTC | 01 Jul 22 22:56 UTC |
	|         | kindnet-20220701225120-10066                      |          |         |         |                     |                     |
	| delete  | -p auto-20220701225119-10066                      | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:56 UTC |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC |                     |
	|         | enable-default-cni-20220701225120-10066           |          |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |          |         |         |                     |                     |
	|         | --enable-default-cni=true                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	| start   | -p bridge-20220701225120-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:56 UTC |
	|         | --memory=2048                                     |          |         |         |                     |                     |
	|         | --alsologtostderr                                 |          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |          |         |         |                     |                     |
	|         | --cni=bridge --driver=docker                      |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	| ssh     | -p bridge-20220701225120-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:56 UTC |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| ssh     | -p calico-20220701225121-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:56 UTC |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| delete  | -p calico-20220701225121-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:56 UTC |
	| delete  | -p bridge-20220701225120-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:57 UTC |
	| start   | -p cilium-20220701225121-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:58 UTC |
	|         | --memory=2048                                     |          |         |         |                     |                     |
	|         | --alsologtostderr                                 |          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |          |         |         |                     |                     |
	|         | --cni=cilium --driver=docker                      |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:57 UTC |                     |
	|         | old-k8s-version-20220701225700-10066              |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |          |         |         |                     |                     |
	|         | --disable-driver-mounts                           |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:57 UTC | 01 Jul 22 22:57 UTC |
	|         | enable-default-cni-20220701225120-10066           |          |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:57 UTC | 01 Jul 22 22:57 UTC |
	|         | enable-default-cni-20220701225120-10066           |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:57 UTC |                     |
	|         | no-preload-20220701225718-10066                   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --preload=false                       |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| ssh     | -p cilium-20220701225121-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:58 UTC | 01 Jul 22 22:58 UTC |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| delete  | -p cilium-20220701225121-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:58 UTC | 01 Jul 22 22:58 UTC |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:58 UTC | 01 Jul 22 22:59 UTC |
	|         | embed-certs-20220701225830-10066                  |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC | 01 Jul 22 22:59 UTC |
	|         | embed-certs-20220701225830-10066                  |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC | 01 Jul 22 22:59 UTC |
	|         | embed-certs-20220701225830-10066                  |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC | 01 Jul 22 22:59 UTC |
	|         | embed-certs-20220701225830-10066                  |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC |                     |
	|         | embed-certs-20220701225830-10066                  |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 23:00 UTC | 01 Jul 22 23:00 UTC |
	|         | kubernetes-upgrade-20220701225105-10066           |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 23:00 UTC | 01 Jul 22 23:00 UTC |
	|         | disable-driver-mounts-20220701230032-10066        |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 23:00 UTC |                     |
	|         | default-k8s-different-port-20220701230032-10066   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/01 23:00:32
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
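
	The header above documents the klog-style line format used for the rest of this dump; a small, self-contained parser for it (a sketch, with the regexp derived only from that format string):

	package main

	import (
		"fmt"
		"regexp"
	)

	// One capture group per field of "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg".
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

	func main() {
		m := klogLine.FindStringSubmatch("I0701 23:00:32.356530  239469 out.go:296] Setting OutFile to fd 1 ...")
		if m == nil {
			fmt.Println("no match")
			return
		}
		fmt.Printf("severity=%s mmdd=%s time=%s tid=%s src=%s msg=%q\n", m[1], m[2], m[3], m[4], m[5], m[6])
	}
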
	I0701 23:00:32.356530  239469 out.go:296] Setting OutFile to fd 1 ...
	I0701 23:00:32.356741  239469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:00:32.356752  239469 out.go:309] Setting ErrFile to fd 2...
	I0701 23:00:32.356757  239469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:00:32.357259  239469 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 23:00:32.357532  239469 out.go:303] Setting JSON to false
	I0701 23:00:32.359845  239469 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2585,"bootTime":1656713847,"procs":1325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 23:00:32.359924  239469 start.go:125] virtualization: kvm guest
	I0701 23:00:32.362432  239469 out.go:177] * [default-k8s-different-port-20220701230032-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0701 23:00:32.364367  239469 out.go:177]   - MINIKUBE_LOCATION=14483
	I0701 23:00:32.364374  239469 notify.go:193] Checking for updates...
	I0701 23:00:32.365884  239469 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 23:00:32.367371  239469 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:00:32.368928  239469 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 23:00:32.370375  239469 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 23:00:32.372140  239469 config.go:178] Loaded profile config "embed-certs-20220701225830-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:00:32.372246  239469 config.go:178] Loaded profile config "no-preload-20220701225718-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:00:32.372323  239469 config.go:178] Loaded profile config "old-k8s-version-20220701225700-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0701 23:00:32.372361  239469 driver.go:360] Setting default libvirt URI to qemu:///system
	I0701 23:00:32.414752  239469 docker.go:137] docker version: linux-20.10.17
	I0701 23:00:32.414847  239469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:00:32.524346  239469 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:57 SystemTime:2022-07-01 23:00:32.44675508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:00:32.524469  239469 docker.go:254] overlay module found
	I0701 23:00:32.526463  239469 out.go:177] * Using the docker driver based on user configuration
	I0701 23:00:32.527806  239469 start.go:284] selected driver: docker
	I0701 23:00:32.527824  239469 start.go:808] validating driver "docker" against <nil>
	I0701 23:00:32.527842  239469 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 23:00:32.529035  239469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:00:32.639135  239469 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:57 SystemTime:2022-07-01 23:00:32.562082913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:00:32.639255  239469 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0701 23:00:32.639406  239469 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 23:00:32.641295  239469 out.go:177] * Using Docker driver with root privileges
	I0701 23:00:32.642763  239469 cni.go:95] Creating CNI manager for ""
	I0701 23:00:32.642792  239469 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:00:32.642813  239469 start_flags.go:305] Found "CNI" CNI - setting NetworkPlugin=cni
	I0701 23:00:32.642827  239469 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
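
	This config block is what gets persisted to the profile's config.json (the "Saving config to ..." line just below), and Go's JSON encoder keeps the same field names, so a trimmed reader can look roughly like this (a hand-picked subset of fields; the path is shortened here, the full profiles/<name>/config.json path is in the log):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// Subset of the cluster config above; field names match the dump, which is
	// what the JSON on disk uses too.
	type clusterConfig struct {
		Name             string
		Memory           int
		Driver           string
		KubernetesConfig struct {
			KubernetesVersion string
			ContainerRuntime  string
			NodePort          int // 8444 here, i.e. the --apiserver-port under test
		}
	}

	func main() {
		b, err := os.ReadFile("config.json") // shortened path, see the log line below
		if err != nil {
			fmt.Println(err)
			return
		}
		var cc clusterConfig
		if err := json.Unmarshal(b, &cc); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("%s: Kubernetes %s on %s, apiserver port %d\n",
			cc.Name, cc.KubernetesConfig.KubernetesVersion,
			cc.KubernetesConfig.ContainerRuntime, cc.KubernetesConfig.NodePort)
	}
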
	I0701 23:00:32.644523  239469 out.go:177] * Starting control plane node default-k8s-different-port-20220701230032-10066 in cluster default-k8s-different-port-20220701230032-10066
	I0701 23:00:32.645977  239469 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0701 23:00:32.647607  239469 out.go:177] * Pulling base image ...
	I0701 23:00:32.649208  239469 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:00:32.649244  239469 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0701 23:00:32.649258  239469 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4
	I0701 23:00:32.649266  239469 cache.go:57] Caching tarball of preloaded images
	I0701 23:00:32.649518  239469 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 23:00:32.649550  239469 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on containerd
	I0701 23:00:32.649687  239469 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/config.json ...
	I0701 23:00:32.649713  239469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/config.json: {Name:mke93ae23ec1465a166017f8899d6d9873d4cc00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
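
	The "WriteFile acquiring" line describes a file lock with a 500ms retry delay and a 1m timeout. As a shape-only sketch with the same knobs (not minikube's actual lock code), an O_EXCL lock file with retry looks like this:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquire takes an exclusive lock file next to path, retrying every delay
	// until timeout, and returns a release func.
	func acquire(path string, delay, timeout time.Duration) (func(), error) {
		lock := path + ".lock"
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(lock) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out waiting for " + lock)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquire("config.json", 500*time.Millisecond, time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		// ... write config.json while holding the lock ...
	}
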
	I0701 23:00:32.683956  239469 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0701 23:00:32.683989  239469 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0701 23:00:32.684005  239469 cache.go:208] Successfully downloaded all kic artifacts
	I0701 23:00:32.684043  239469 start.go:352] acquiring machines lock for default-k8s-different-port-20220701230032-10066: {Name:mk7518221e8259d073969ba977a5dbef99fe5935 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:00:32.684161  239469 start.go:356] acquired machines lock for "default-k8s-different-port-20220701230032-10066" in 100.96µs
	I0701 23:00:32.684184  239469 start.go:91] Provisioning new machine with config: &{Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 23:00:32.684269  239469 start.go:131] createHost starting for "" (driver="docker")
	I0701 23:00:28.929201  235408 pod_ready.go:102] pod "etcd-embed-certs-20220701225830-10066" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:31.429366  235408 pod_ready.go:102] pod "etcd-embed-certs-20220701225830-10066" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:30.580543  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:33.079979  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:30.550924  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:32.551415  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:35.050743  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:32.686619  239469 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0701 23:00:32.686845  239469 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220701230032-10066" (driver="docker")
	I0701 23:00:32.686876  239469 client.go:168] LocalClient.Create starting
	I0701 23:00:32.686948  239469 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem
	I0701 23:00:32.686986  239469 main.go:134] libmachine: Decoding PEM data...
	I0701 23:00:32.687009  239469 main.go:134] libmachine: Parsing certificate...
	I0701 23:00:32.687080  239469 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem
	I0701 23:00:32.687102  239469 main.go:134] libmachine: Decoding PEM data...
	I0701 23:00:32.687116  239469 main.go:134] libmachine: Parsing certificate...
	I0701 23:00:32.687434  239469 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220701230032-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0701 23:00:32.719427  239469 cli_runner.go:211] docker network inspect default-k8s-different-port-20220701230032-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0701 23:00:32.719495  239469 network_create.go:272] running [docker network inspect default-k8s-different-port-20220701230032-10066] to gather additional debugging logs...
	I0701 23:00:32.719519  239469 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220701230032-10066
	W0701 23:00:32.752806  239469 cli_runner.go:211] docker network inspect default-k8s-different-port-20220701230032-10066 returned with exit code 1
	I0701 23:00:32.752848  239469 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220701230032-10066]: docker network inspect default-k8s-different-port-20220701230032-10066: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220701230032-10066
	I0701 23:00:32.752877  239469 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220701230032-10066]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220701230032-10066
	
	** /stderr **
	I0701 23:00:32.752943  239469 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 23:00:32.786346  239469 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b090b5bc601e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:8d:0d:43:b9}}
	I0701 23:00:32.787222  239469 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-585a063a32f9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:48:75:30:46}}
	I0701 23:00:32.787935  239469 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-ad316ca52a99 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:19:e0:88:c5}}
	I0701 23:00:32.789206  239469 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc0009fa010] misses:0}
	I0701 23:00:32.789264  239469 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
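
	The "skipping subnet" lines plus the reservation above amount to a linear walk over candidate 192.168.x.0/24 blocks, taking the first one with no local interface inside it. A rough reconstruction (the step of 9 between candidates is read off this log, not a documented constant):

	package main

	import (
		"fmt"
		"net"
	)

	// taken reports whether any local interface address falls inside subnet.
	func taken(subnet *net.IPNet) bool {
		addrs, _ := net.InterfaceAddrs()
		for _, a := range addrs {
			if ip, _, err := net.ParseCIDR(a.String()); err == nil && subnet.Contains(ip) {
				return true
			}
		}
		return false
	}

	func main() {
		for third := 49; third <= 254; third += 9 { // 49, 58, 67, 76, ... as in the log
			_, subnet, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
			if !taken(subnet) {
				fmt.Println("using free private subnet", subnet) // 192.168.76.0/24 in this run
				return
			}
		}
	}
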
	I0701 23:00:32.789282  239469 network_create.go:115] attempt to create docker network default-k8s-different-port-20220701230032-10066 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0701 23:00:32.789350  239469 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220701230032-10066 default-k8s-different-port-20220701230032-10066
	I0701 23:00:32.862393  239469 network_create.go:99] docker network default-k8s-different-port-20220701230032-10066 192.168.76.0/24 created
	I0701 23:00:32.862432  239469 kic.go:106] calculated static IP "192.168.76.2" for the "default-k8s-different-port-20220701230032-10066" container
	I0701 23:00:32.862498  239469 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0701 23:00:32.897872  239469 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220701230032-10066 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220701230032-10066 --label created_by.minikube.sigs.k8s.io=true
	I0701 23:00:32.932471  239469 oci.go:103] Successfully created a docker volume default-k8s-different-port-20220701230032-10066
	I0701 23:00:32.932560  239469 cli_runner.go:164] Run: docker run --rm --name default-k8s-different-port-20220701230032-10066-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220701230032-10066 --entrypoint /usr/bin/test -v default-k8s-different-port-20220701230032-10066:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib
	I0701 23:00:33.531943  239469 oci.go:107] Successfully prepared a docker volume default-k8s-different-port-20220701230032-10066
	I0701 23:00:33.531997  239469 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:00:33.532020  239469 kic.go:179] Starting extracting preloaded images to volume ...
	I0701 23:00:33.532077  239469 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220701230032-10066:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir
	I0701 23:00:33.430073  235408 pod_ready.go:102] pod "etcd-embed-certs-20220701225830-10066" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:35.928647  235408 pod_ready.go:102] pod "etcd-embed-certs-20220701225830-10066" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:38.009695  235408 pod_ready.go:102] pod "etcd-embed-certs-20220701225830-10066" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:35.579671  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:37.580005  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:37.050788  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:39.052026  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:40.268594  239469 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220701230032-10066:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir: (6.73646927s)
	I0701 23:00:40.268627  239469 kic.go:188] duration metric: took 6.736603 seconds to extract preloaded images to volume
	W0701 23:00:40.268766  239469 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0701 23:00:40.268873  239469 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0701 23:00:40.376391  239469 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20220701230032-10066 --name default-k8s-different-port-20220701230032-10066 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220701230032-10066 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20220701230032-10066 --network default-k8s-different-port-20220701230032-10066 --ip 192.168.76.2 --volume default-k8s-different-port-20220701230032-10066:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e
	I0701 23:00:40.794054  239469 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Running}}
	I0701 23:00:40.832495  239469 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:00:40.867253  239469 cli_runner.go:164] Run: docker exec default-k8s-different-port-20220701230032-10066 stat /var/lib/dpkg/alternatives/iptables
	I0701 23:00:40.961011  239469 oci.go:144] the created container "default-k8s-different-port-20220701230032-10066" has a running status.
	I0701 23:00:40.961049  239469 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa...
	I0701 23:00:41.103356  239469 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0701 23:00:41.198323  239469 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:00:41.239991  239469 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0701 23:00:41.240014  239469 kic_runner.go:114] Args: [docker exec --privileged default-k8s-different-port-20220701230032-10066 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0701 23:00:41.330485  239469 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:00:41.365577  239469 machine.go:88] provisioning docker machine ...
	I0701 23:00:41.365634  239469 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220701230032-10066"
	I0701 23:00:41.365691  239469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:00:41.401454  239469 main.go:134] libmachine: Using SSH client type: native
	I0701 23:00:41.401653  239469 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I0701 23:00:41.401676  239469 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220701230032-10066 && echo "default-k8s-different-port-20220701230032-10066" | sudo tee /etc/hostname
	I0701 23:00:41.535055  239469 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220701230032-10066
	
	I0701 23:00:41.535123  239469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:00:41.569248  239469 main.go:134] libmachine: Using SSH client type: native
	I0701 23:00:41.569386  239469 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I0701 23:00:41.569409  239469 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220701230032-10066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220701230032-10066/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220701230032-10066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 23:00:41.686222  239469 main.go:134] libmachine: SSH cmd err, output: <nil>: 
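
	The hostname and hosts-file commands above run over the "native" Go SSH client, dialed against the 22/tcp port published on localhost (49417 in this run). A rough equivalent using golang.org/x/crypto/ssh (the key path and the relaxed host-key check are assumptions suitable only for a throwaway test node):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("id_rsa") // machine key created earlier in this log (path assumed)
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:49417", cfg) // port from the log above
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, _ := sess.CombinedOutput("hostname")
		fmt.Printf("%s", out) // default-k8s-different-port-20220701230032-10066
	}
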
	I0701 23:00:41.686257  239469 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
	I0701 23:00:41.686279  239469 ubuntu.go:177] setting up certificates
	I0701 23:00:41.686286  239469 provision.go:83] configureAuth start
	I0701 23:00:41.686335  239469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:00:41.719108  239469 provision.go:138] copyHostCerts
	I0701 23:00:41.719173  239469 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
	I0701 23:00:41.719187  239469 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
	I0701 23:00:41.719254  239469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
	I0701 23:00:41.719326  239469 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
	I0701 23:00:41.719336  239469 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
	I0701 23:00:41.719363  239469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1679 bytes)
	I0701 23:00:41.719412  239469 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
	I0701 23:00:41.719420  239469 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
	I0701 23:00:41.719445  239469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
	I0701 23:00:41.719487  239469 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220701230032-10066 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220701230032-10066]
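
	The "generating server cert" step issues a certificate signed by the local minikube CA with the SAN list shown above. A compact sketch with crypto/x509 (the CA file names, PKCS#1 key encoding, and RSA-2048 are assumptions; the org, SANs, and 26280h lifetime are copied from this log and the profile config):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Load the signing CA named in the log (assumes PEM with a PKCS#1 RSA key).
		caPEM, err := os.ReadFile("ca.pem")
		if err != nil {
			panic(err)
		}
		caKeyPEM, err := os.ReadFile("ca-key.pem")
		if err != nil {
			panic(err)
		}
		caBlock, _ := pem.Decode(caPEM)
		ca, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			panic(err)
		}
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		if err != nil {
			panic(err)
		}
		priv, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// SAN list and org from the provision.go line above; lifetime is the
		// CertExpiration:26280h0m0s value from the profile config.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-different-port-20220701230032-10066"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			DNSNames:     []string{"localhost", "minikube", "default-k8s-different-port-20220701230032-10066"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &priv.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // server.pem body
	}
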
	I0701 23:00:41.830942  239469 provision.go:172] copyRemoteCerts
	I0701 23:00:41.830995  239469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 23:00:41.831027  239469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:00:41.864512  239469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:00:41.949951  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 23:00:41.967624  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0701 23:00:41.985011  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 23:00:42.002230  239469 provision.go:86] duration metric: configureAuth took 315.931184ms
	I0701 23:00:42.002253  239469 ubuntu.go:193] setting minikube options for container-runtime
	I0701 23:00:42.002393  239469 config.go:178] Loaded profile config "default-k8s-different-port-20220701230032-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:00:42.002406  239469 machine.go:91] provisioned docker machine in 636.794282ms
	I0701 23:00:42.002411  239469 client.go:171] LocalClient.Create took 9.31553155s
	I0701 23:00:42.002430  239469 start.go:173] duration metric: libmachine.API.Create for "default-k8s-different-port-20220701230032-10066" took 9.315581603s
	I0701 23:00:42.002441  239469 start.go:306] post-start starting for "default-k8s-different-port-20220701230032-10066" (driver="docker")
	I0701 23:00:42.002446  239469 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 23:00:42.002486  239469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 23:00:42.002522  239469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:00:42.035732  239469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:00:42.121955  239469 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 23:00:42.124965  239469 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 23:00:42.124995  239469 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 23:00:42.125011  239469 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 23:00:42.125023  239469 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0701 23:00:42.125034  239469 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
	I0701 23:00:42.125087  239469 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
	I0701 23:00:42.125172  239469 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem -> 100662.pem in /etc/ssl/certs
	I0701 23:00:42.125282  239469 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 23:00:42.132257  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:00:42.151720  239469 start.go:309] post-start completed in 149.270154ms
	I0701 23:00:42.152035  239469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:00:42.190634  239469 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/config.json ...
	I0701 23:00:42.190862  239469 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 23:00:42.190902  239469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:00:42.223968  239469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:00:42.310575  239469 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 23:00:42.314378  239469 start.go:134] duration metric: createHost completed in 9.630096746s
	I0701 23:00:42.314401  239469 start.go:81] releasing machines lock for "default-k8s-different-port-20220701230032-10066", held for 9.63022846s
	I0701 23:00:42.314486  239469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:00:42.348038  239469 ssh_runner.go:195] Run: systemctl --version
	I0701 23:00:42.348081  239469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:00:42.348103  239469 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0701 23:00:42.348167  239469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:00:42.386661  239469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:00:42.387446  239469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:00:42.471143  239469 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 23:00:42.492277  239469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 23:00:42.501455  239469 docker.go:179] disabling docker service ...
	I0701 23:00:42.501510  239469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 23:00:42.517379  239469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 23:00:42.526463  239469 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 23:00:42.608061  239469 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 23:00:42.690439  239469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0701 23:00:42.699282  239469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 23:00:42.711133  239469 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0701 23:00:42.718341  239469 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0701 23:00:42.725755  239469 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0701 23:00:42.733903  239469 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0701 23:00:42.741339  239469 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0701 23:00:42.748827  239469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
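The base64 payload in that last command is not opaque: it decodes to the single line `version = 2`, marking the drop-in as a containerd v2 config so the `imports` directive added two commands earlier can load it.

	echo 'dmVyc2lvbiA9IDIK' | base64 -d
	# version = 2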
	I0701 23:00:42.760667  239469 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 23:00:42.767095  239469 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 23:00:42.773015  239469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 23:00:42.851570  239469 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 23:00:42.924491  239469 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0701 23:00:42.924562  239469 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0701 23:00:42.928054  239469 start.go:471] Will wait 60s for crictl version
	I0701 23:00:42.928115  239469 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:00:42.954643  239469 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
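After the containerd restart, the log waits up to 60s for the socket and then for crictl to answer; the version block above shows containerd 1.6.6 speaking CRI v1alpha2. A rough standalone equivalent of that wait (a sketch, not minikube's own code):

	# Poll for the containerd socket, then confirm the CRI endpoint responds
	for i in $(seq 1 60); do
	  [ -S /run/containerd/containerd.sock ] && break
	  sleep 1
	done
	sudo crictl version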
	I0701 23:00:42.954717  239469 ssh_runner.go:195] Run: containerd --version
	I0701 23:00:42.984104  239469 ssh_runner.go:195] Run: containerd --version
	I0701 23:00:43.016309  239469 out.go:177] * Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	I0701 23:00:38.428388  235408 pod_ready.go:92] pod "etcd-embed-certs-20220701225830-10066" in "kube-system" namespace has status "Ready":"True"
	I0701 23:00:38.428416  235408 pod_ready.go:81] duration metric: took 11.509205954s waiting for pod "etcd-embed-certs-20220701225830-10066" in "kube-system" namespace to be "Ready" ...
	I0701 23:00:38.428432  235408 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220701225830-10066" in "kube-system" namespace to be "Ready" ...
	I0701 23:00:38.432484  235408 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220701225830-10066" in "kube-system" namespace has status "Ready":"True"
	I0701 23:00:38.432502  235408 pod_ready.go:81] duration metric: took 4.063691ms waiting for pod "kube-apiserver-embed-certs-20220701225830-10066" in "kube-system" namespace to be "Ready" ...
	I0701 23:00:38.432514  235408 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220701225830-10066" in "kube-system" namespace to be "Ready" ...
	I0701 23:00:39.594519  235408 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220701225830-10066" in "kube-system" namespace has status "Ready":"True"
	I0701 23:00:39.594583  235408 pod_ready.go:81] duration metric: took 1.16206055s waiting for pod "kube-controller-manager-embed-certs-20220701225830-10066" in "kube-system" namespace to be "Ready" ...
	I0701 23:00:39.594598  235408 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-njxjm" in "kube-system" namespace to be "Ready" ...
	I0701 23:00:39.598784  235408 pod_ready.go:92] pod "kube-proxy-njxjm" in "kube-system" namespace has status "Ready":"True"
	I0701 23:00:39.598803  235408 pod_ready.go:81] duration metric: took 4.197189ms waiting for pod "kube-proxy-njxjm" in "kube-system" namespace to be "Ready" ...
	I0701 23:00:39.598814  235408 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220701225830-10066" in "kube-system" namespace to be "Ready" ...
	I0701 23:00:39.836593  235408 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220701225830-10066" in "kube-system" namespace has status "Ready":"True"
	I0701 23:00:39.836624  235408 pod_ready.go:81] duration metric: took 237.801947ms waiting for pod "kube-scheduler-embed-certs-20220701225830-10066" in "kube-system" namespace to be "Ready" ...
	I0701 23:00:39.836637  235408 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace to be "Ready" ...
	I0701 23:00:42.149237  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:40.079579  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:42.579391  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:41.550934  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:44.050781  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:43.017720  239469 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220701230032-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 23:00:43.050698  239469 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0701 23:00:43.053837  239469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
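The hosts-file rewrite above is a filter-and-append idiom: drop any stale `host.minikube.internal` record, append the fresh one, and copy the result back over /etc/hosts through a temp file (a plain `>` redirection would not cross the sudo boundary). The same pattern recurs below for `control-plane.minikube.internal`.

	# Generic form of the idiom, with the values from the log
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  echo $'192.168.76.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts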
	I0701 23:00:43.063525  239469 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:00:43.063575  239469 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:00:43.086821  239469 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:00:43.086842  239469 containerd.go:461] Images already preloaded, skipping extraction
	I0701 23:00:43.086882  239469 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:00:43.110586  239469 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:00:43.110607  239469 cache_images.go:84] Images are preloaded, skipping loading
	I0701 23:00:43.110657  239469 ssh_runner.go:195] Run: sudo crictl info
	I0701 23:00:43.133264  239469 cni.go:95] Creating CNI manager for ""
	I0701 23:00:43.133286  239469 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:00:43.133296  239469 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0701 23:00:43.133307  239469 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220701230032-10066 NodeName:default-k8s-different-port-20220701230032-10066 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0701 23:00:43.133435  239469 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220701230032-10066"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
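The generated config above is four YAML documents separated by `---`: an InitConfiguration and a ClusterConfiguration for kubeadm, plus a KubeletConfiguration and a KubeProxyConfiguration. One way to sanity-check such a file before init (a sketch; the preflight phase runs the same checks `kubeadm init` would, and assumes you are on the node with the staged file):

	sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml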
	
	I0701 23:00:43.133515  239469 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220701230032-10066 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
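The kubelet drop-in above uses the standard systemd override pattern: the first, empty `ExecStart=` clears the ExecStart inherited from the base unit, and the next line redefines it with the full flag set. After a daemon-reload the merged result can be checked with:

	sudo systemctl daemon-reload
	systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart   # the effective command line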
	I0701 23:00:43.133556  239469 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0701 23:00:43.141170  239469 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 23:00:43.141232  239469 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 23:00:43.148417  239469 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0701 23:00:43.161136  239469 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 23:00:43.173351  239469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0701 23:00:43.185576  239469 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0701 23:00:43.188303  239469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:00:43.197060  239469 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066 for IP: 192.168.76.2
	I0701 23:00:43.197145  239469 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
	I0701 23:00:43.197177  239469 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
	I0701 23:00:43.197225  239469 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/client.key
	I0701 23:00:43.197241  239469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/client.crt with IP's: []
	I0701 23:00:43.348543  239469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/client.crt ...
	I0701 23:00:43.348577  239469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/client.crt: {Name:mkc44243c9191651565000054c142b08a17f2e54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:00:43.348812  239469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/client.key ...
	I0701 23:00:43.348830  239469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/client.key: {Name:mk281a4b798c59f66934c331b74ffc5b9c596ba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:00:43.348970  239469 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key.31bdca25
	I0701 23:00:43.348991  239469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0701 23:00:43.491315  239469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.crt.31bdca25 ...
	I0701 23:00:43.491344  239469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.crt.31bdca25: {Name:mk4fb2ce24220cb60283d06e92e48e13cb204171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:00:43.491518  239469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key.31bdca25 ...
	I0701 23:00:43.491537  239469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key.31bdca25: {Name:mkaad5f86ff30d4b24ba91ec365805c937e68259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:00:43.491673  239469 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.crt
	I0701 23:00:43.491739  239469 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key
	I0701 23:00:43.491784  239469 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.key
	I0701 23:00:43.491819  239469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.crt with IP's: []
	I0701 23:00:43.800330  239469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.crt ...
	I0701 23:00:43.800358  239469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.crt: {Name:mk35e707e41436cad5d092ac8ad811e177fe2cb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:00:43.800558  239469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.key ...
	I0701 23:00:43.800575  239469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.key: {Name:mk2c11cc887d5c49c0facae0cc376a37aace8a39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:00:43.800806  239469 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem (1338 bytes)
	W0701 23:00:43.800854  239469 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066_empty.pem, impossibly tiny 0 bytes
	I0701 23:00:43.800874  239469 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 23:00:43.800909  239469 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
	I0701 23:00:43.800942  239469 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
	I0701 23:00:43.800976  239469 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1679 bytes)
	I0701 23:00:43.801033  239469 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:00:43.801572  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0701 23:00:43.819814  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 23:00:43.837329  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 23:00:43.854741  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0701 23:00:43.871447  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 23:00:43.887796  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0701 23:00:43.903812  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 23:00:43.920103  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 23:00:43.936429  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 23:00:43.953084  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem --> /usr/share/ca-certificates/10066.pem (1338 bytes)
	I0701 23:00:43.969547  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /usr/share/ca-certificates/100662.pem (1708 bytes)
	I0701 23:00:43.986383  239469 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 23:00:43.998529  239469 ssh_runner.go:195] Run: openssl version
	I0701 23:00:44.003011  239469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10066.pem && ln -fs /usr/share/ca-certificates/10066.pem /etc/ssl/certs/10066.pem"
	I0701 23:00:44.009754  239469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10066.pem
	I0701 23:00:44.012613  239469 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul  1 22:28 /usr/share/ca-certificates/10066.pem
	I0701 23:00:44.012658  239469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10066.pem
	I0701 23:00:44.017062  239469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10066.pem /etc/ssl/certs/51391683.0"
	I0701 23:00:44.023919  239469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100662.pem && ln -fs /usr/share/ca-certificates/100662.pem /etc/ssl/certs/100662.pem"
	I0701 23:00:44.030894  239469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100662.pem
	I0701 23:00:44.033664  239469 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul  1 22:28 /usr/share/ca-certificates/100662.pem
	I0701 23:00:44.033708  239469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100662.pem
	I0701 23:00:44.038436  239469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100662.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 23:00:44.046209  239469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 23:00:44.053628  239469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:00:44.056627  239469 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul  1 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:00:44.056668  239469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:00:44.061040  239469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
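The `ln -fs` targets above follow OpenSSL's hashed-directory convention: the trust store is scanned via symlinks named `<subject-hash>.0`, where the hash is exactly what `openssl x509 -hash -noout` prints (`b5213941` for the minikube CA here). Sketch of the same step for one cert:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"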
	I0701 23:00:44.068266  239469 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:00:44.068354  239469 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0701 23:00:44.068399  239469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:00:44.092472  239469 cri.go:87] found id: ""
	I0701 23:00:44.092532  239469 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 23:00:44.099473  239469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:00:44.106313  239469 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0701 23:00:44.106364  239469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:00:44.112690  239469 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 23:00:44.112726  239469 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0701 23:00:44.149592  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:46.649816  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:45.080054  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:47.579388  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:46.051072  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:48.051138  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:50.051263  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:48.651816  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:51.149394  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:53.150814  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:49.579551  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:51.580060  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:55.027861  239469 out.go:204]   - Generating certificates and keys ...
	I0701 23:00:55.031084  239469 out.go:204]   - Booting up control plane ...
	I0701 23:00:55.033895  239469 out.go:204]   - Configuring RBAC rules ...
	I0701 23:00:55.036373  239469 cni.go:95] Creating CNI manager for ""
	I0701 23:00:55.036397  239469 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:00:55.038256  239469 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:00:52.052141  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:54.551901  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:55.039627  239469 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:00:55.043753  239469 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:00:55.043774  239469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:00:55.058875  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0701 23:00:55.944808  239469 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 23:00:55.944900  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04 minikube.k8s.io/name=default-k8s-different-port-20220701230032-10066 minikube.k8s.io/updated_at=2022_07_01T23_00_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:00:55.944905  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:00:56.049417  239469 ops.go:34] apiserver oom_adj: -16
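The post-init steps above read the apiserver's OOM score (-16, i.e. shielded from the OOM killer), label the node with minikube metadata, and bind the `kube-system:default` service account to cluster-admin. A quick verification sketch (the kubectl lines work from any host with the kubeconfig; run the last line inside the node):

	kubectl get nodes --show-labels | grep minikube.k8s.io/version
	kubectl get clusterrolebinding minikube-rbac -o wide
	cat /proc/$(pgrep kube-apiserver)/oom_adj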
	I0701 23:00:56.049498  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:00:56.627360  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:00:57.127671  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:00:55.650576  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:58.149296  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:54.079920  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:56.080084  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:58.579639  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:00:57.050096  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:59.050376  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:00:57.627028  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:00:58.127761  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:00:58.627765  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:00:59.127760  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:00:59.627280  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:00.127784  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:00.627744  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:01.127743  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:01.627619  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:02.127782  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:00.149619  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:02.149745  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:00.580194  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:03.080050  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:01.050568  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:03.050638  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:05.051434  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:02.627395  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:03.127748  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:03.627304  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:04.127777  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:04.627576  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:05.127612  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:05.627425  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:06.127499  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:06.627771  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:07.127752  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:04.150067  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:06.150203  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:08.151091  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:07.626972  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:08.126852  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:08.186347  239469 kubeadm.go:1045] duration metric: took 12.241501127s to wait for elevateKubeSystemPrivileges.
	I0701 23:01:08.186384  239469 kubeadm.go:397] StartCluster complete in 24.118125112s
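The long run of `kubectl get sa default` calls above is a poll: kubeadm returns before kube-controller-manager has created the `default` ServiceAccount, so minikube retries every ~500ms until it exists (12.2s here). A standalone equivalent of that wait:

	until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
	  sleep 0.5
	done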
	I0701 23:01:08.186406  239469 settings.go:142] acquiring lock: {Name:mk319951f11766fbe002e53432d5b04e4322851e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:01:08.186506  239469 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:01:08.187848  239469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:01:08.702475  239469 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220701230032-10066" rescaled to 1
	I0701 23:01:08.702522  239469 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 23:01:08.704547  239469 out.go:177] * Verifying Kubernetes components...
	I0701 23:01:08.702601  239469 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 23:01:08.702617  239469 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0701 23:01:08.702761  239469 config.go:178] Loaded profile config "default-k8s-different-port-20220701230032-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:01:08.705966  239469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:01:08.706005  239469 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:01:08.706035  239469 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220701230032-10066"
	W0701 23:01:08.706048  239469 addons.go:162] addon storage-provisioner should already be in state true
	I0701 23:01:08.706044  239469 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:01:08.706068  239469 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220701230032-10066"
	I0701 23:01:08.706093  239469 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:01:08.706457  239469 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:01:08.706603  239469 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:01:08.753029  239469 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 23:01:05.580076  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:08.079894  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:08.754122  239469 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220701230032-10066"
	W0701 23:01:08.754509  239469 addons.go:162] addon default-storageclass should already be in state true
	I0701 23:01:08.754587  239469 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:01:08.754490  239469 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:01:08.754619  239469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 23:01:08.754669  239469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:01:08.755065  239469 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:01:08.781972  239469 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
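The sed pipeline above splices a CoreDNS `hosts` block in front of the `forward . /etc/resolv.conf` directive so pods can resolve `host.minikube.internal` to the gateway IP; the 'host record injected' line further down confirms it took effect. The resulting Corefile fragment can be checked with:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' \
	  | grep -B1 -A3 'hosts {'
	# expected fragment:
	#     hosts {
	#        192.168.76.1 host.minikube.internal
	#        fallthrough
	#     }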
	I0701 23:01:08.783498  239469 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220701230032-10066" to be "Ready" ...
	I0701 23:01:08.798287  239469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:01:08.801197  239469 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 23:01:08.801219  239469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 23:01:08.801266  239469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:01:08.840063  239469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:01:08.933105  239469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:01:08.974642  239469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 23:01:09.049107  239469 start.go:809] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0701 23:01:09.334904  239469 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0701 23:01:07.551138  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:10.051041  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:09.336309  239469 addons.go:414] enableAddons completed in 633.692494ms
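	
	For reference, the bash pipeline logged at 23:01:08.781972 is how minikube publishes host.minikube.internal: sed inserts a hosts block ahead of CoreDNS's forward directive and kubectl replaces the ConfigMap, which the "host record injected into CoreDNS" line then confirms. A minimal sketch for spot-checking the result from the host (context name assumed to match the profile, as minikube normally sets it):
	
	  # Print the live Corefile for this profile's cluster:
	  kubectl --context default-k8s-different-port-20220701230032-10066 \
	    -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	  # The injected fragment should read roughly:
	  #     hosts {
	  #        192.168.76.1 host.minikube.internal
	  #        fallthrough
	  #     }
	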
	I0701 23:01:10.790274  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:10.650518  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:12.650609  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:10.579509  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:12.579787  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:12.550808  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:15.050467  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:12.790423  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:14.790524  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:17.290624  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:15.149862  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:17.650721  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:15.079684  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:17.080035  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:17.051214  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:19.550336  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:19.291275  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:21.792400  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:20.149501  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:22.149664  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:19.579945  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:22.080141  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:21.550429  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:23.550493  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:24.290310  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:26.290429  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:24.650035  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:27.148999  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:24.579221  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:26.579492  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:25.551255  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:28.050875  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:28.290866  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:30.790694  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:29.150030  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:31.651021  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:29.079699  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:31.079895  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:33.080116  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:30.551204  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:32.551278  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:35.050475  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:33.290139  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:35.290712  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:34.149614  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:36.650251  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:35.579892  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:37.580013  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:37.050586  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:39.051100  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:37.790386  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:39.790596  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:42.290903  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:38.650468  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:41.148959  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:43.149552  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:39.580127  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:42.079757  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:41.550818  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:44.050973  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:44.790340  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:46.790403  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:45.649757  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:47.650332  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:44.578989  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:46.579646  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:46.551062  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:49.050938  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:49.291032  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:51.292650  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:50.149827  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:52.649913  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:49.079763  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:51.079825  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:53.080040  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:51.550870  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:54.050245  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:53.790108  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:55.790393  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:55.149306  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:57.649477  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:55.579848  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:58.080095  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:01:56.051187  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:01:58.550942  215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
	I0701 23:02:00.579813  220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:02:01.082156  220277 node_ready.go:38] duration metric: took 4m0.009577086s waiting for node "no-preload-20220701225718-10066" to be "Ready" ...
	I0701 23:02:01.083994  220277 out.go:177] 
	W0701 23:02:01.085364  220277 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0701 23:02:01.085381  220277 out.go:239] * 
	W0701 23:02:01.086064  220277 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 23:02:01.088143  220277 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	3df1db606ec0f       6fb66cd78abfe       About a minute ago   Running             kindnet-cni               1                   b4daedbbfa0f4
	a277d78bbb6be       6fb66cd78abfe       3 minutes ago        Exited              kindnet-cni               0                   b4daedbbfa0f4
	b6c46b43c578c       a634548d10b03       4 minutes ago        Running             kube-proxy                0                   d3671c6594e46
	ac54680228313       5d725196c1f47       4 minutes ago        Running             kube-scheduler            0                   df504f599edde
	9f4bd4048f717       d3377ffb7177c       4 minutes ago        Running             kube-apiserver            0                   7f2c7d420e188
	6af50f79ce840       34cdf99b1bb3b       4 minutes ago        Running             kube-controller-manager   0                   397a5ee302dea
	b90cae4e4b7ea       aebe758cef4cd       4 minutes ago        Running             etcd                      0                   172c2b390191b
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-07-01 22:57:21 UTC, end at Fri 2022-07-01 23:02:02 UTC. --
	Jul 01 22:58:00 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T22:58:00.848495290Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3671c6594e4689d5478d1e0f4432b616d8d92b19d82a8e7f1eb99caac8544e5 pid=2093 runtime=io.containerd.runc.v2
	Jul 01 22:58:00 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T22:58:00.907712175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5ck82,Uid:1b54a384-18b1-4c4f-84ab-fe3f8d2c3100,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3671c6594e4689d5478d1e0f4432b616d8d92b19d82a8e7f1eb99caac8544e5\""
	Jul 01 22:58:00 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T22:58:00.911458304Z" level=info msg="CreateContainer within sandbox \"d3671c6594e4689d5478d1e0f4432b616d8d92b19d82a8e7f1eb99caac8544e5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Jul 01 22:58:00 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T22:58:00.928252050Z" level=info msg="CreateContainer within sandbox \"d3671c6594e4689d5478d1e0f4432b616d8d92b19d82a8e7f1eb99caac8544e5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b6c46b43c578cf7a32b68dbbe142a34080a7ce8d919b3b25cd350be67762ece8\""
	Jul 01 22:58:00 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T22:58:00.928951396Z" level=info msg="StartContainer for \"b6c46b43c578cf7a32b68dbbe142a34080a7ce8d919b3b25cd350be67762ece8\""
	Jul 01 22:58:01 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T22:58:01.034648030Z" level=info msg="StartContainer for \"b6c46b43c578cf7a32b68dbbe142a34080a7ce8d919b3b25cd350be67762ece8\" returns successfully"
	Jul 01 22:58:01 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T22:58:01.139919259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-b5wkl,Uid:bc770683-78b7-449f-a0af-5a2cc006275c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4daedbbfa0f45311c3e6da958f4916278f256b9a9641a1ff09a6fa60fbd55a0\""
	Jul 01 22:58:01 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T22:58:01.143239442Z" level=info msg="PullImage \"kindest/kindnetd:v20220510-4929dd75\""
	Jul 01 22:58:05 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T22:58:05.384522984Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/kindest/kindnetd:v20220510-4929dd75,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Jul 01 22:58:05 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T22:58:05.386841832Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6fb66cd78abfe9e0735a9a751f2586b7984e0d279e87fa8dd175781de6595627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Jul 01 22:58:05 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T22:58:05.388682873Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/kindest/kindnetd:v20220510-4929dd75,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Jul 01 22:58:05 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T22:58:05.390317765Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/kindest/kindnetd@sha256:39494477a3fa001aae716b704a8991f4f62d2ccf1aaaa65692da6c805b18856c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Jul 01 22:58:05 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T22:58:05.390866830Z" level=info msg="PullImage \"kindest/kindnetd:v20220510-4929dd75\" returns image reference \"sha256:6fb66cd78abfe9e0735a9a751f2586b7984e0d279e87fa8dd175781de6595627\""
	Jul 01 22:58:05 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T22:58:05.395453384Z" level=info msg="CreateContainer within sandbox \"b4daedbbfa0f45311c3e6da958f4916278f256b9a9641a1ff09a6fa60fbd55a0\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Jul 01 22:58:05 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T22:58:05.421744234Z" level=info msg="CreateContainer within sandbox \"b4daedbbfa0f45311c3e6da958f4916278f256b9a9641a1ff09a6fa60fbd55a0\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"a277d78bbb6bec9cace8086ab22bf4a57f19419ce64afa2fac270141ae6bbe7d\""
	Jul 01 22:58:05 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T22:58:05.422312918Z" level=info msg="StartContainer for \"a277d78bbb6bec9cace8086ab22bf4a57f19419ce64afa2fac270141ae6bbe7d\""
	Jul 01 22:58:05 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T22:58:05.621200262Z" level=info msg="StartContainer for \"a277d78bbb6bec9cace8086ab22bf4a57f19419ce64afa2fac270141ae6bbe7d\" returns successfully"
	Jul 01 23:00:45 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:00:45.977790075Z" level=info msg="shim disconnected" id=a277d78bbb6bec9cace8086ab22bf4a57f19419ce64afa2fac270141ae6bbe7d
	Jul 01 23:00:45 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:00:45.977866958Z" level=warning msg="cleaning up after shim disconnected" id=a277d78bbb6bec9cace8086ab22bf4a57f19419ce64afa2fac270141ae6bbe7d namespace=k8s.io
	Jul 01 23:00:45 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:00:45.977881830Z" level=info msg="cleaning up dead shim"
	Jul 01 23:00:45 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:00:45.987235866Z" level=warning msg="cleanup warnings time=\"2022-07-01T23:00:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2459 runtime=io.containerd.runc.v2\n"
	Jul 01 23:00:46 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:00:46.855586580Z" level=info msg="CreateContainer within sandbox \"b4daedbbfa0f45311c3e6da958f4916278f256b9a9641a1ff09a6fa60fbd55a0\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Jul 01 23:00:46 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:00:46.871461154Z" level=info msg="CreateContainer within sandbox \"b4daedbbfa0f45311c3e6da958f4916278f256b9a9641a1ff09a6fa60fbd55a0\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"3df1db606ec0f65fcd1deed89e9040dca419e64d6b6b28ff83e39397caef4d37\""
	Jul 01 23:00:46 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:00:46.871996437Z" level=info msg="StartContainer for \"3df1db606ec0f65fcd1deed89e9040dca419e64d6b6b28ff83e39397caef4d37\""
	Jul 01 23:00:47 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:00:47.122056002Z" level=info msg="StartContainer for \"3df1db606ec0f65fcd1deed89e9040dca419e64d6b6b28ff83e39397caef4d37\" returns successfully"
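	
	The containerd excerpt above shows the first kindnet-cni container (attempt 0, id a277d78bbb6be...) dying at 23:00:45 ("shim disconnected") and kubelet recreating it as attempt 1 in the same sandbox, matching the container-status table. A hedged sketch for pulling the exit details on the node; crictl accepts the ID prefixes shown in the table, and these are standard CRI commands rather than anything the harness ran:
	
	  # Exit code/reason of the first kindnet-cni attempt:
	  minikube ssh -p no-preload-20220701225718-10066 -- sudo crictl inspect a277d78bbb6be
	  # Its final log lines before the shim disconnected:
	  minikube ssh -p no-preload-20220701225718-10066 -- sudo crictl logs --tail 50 a277d78bbb6be
	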
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220701225718-10066
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220701225718-10066
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04
	                    minikube.k8s.io/name=no-preload-20220701225718-10066
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_01T22_57_50_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 01 Jul 2022 22:57:44 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220701225718-10066
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 01 Jul 2022 23:01:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 01 Jul 2022 22:58:17 +0000   Fri, 01 Jul 2022 22:57:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 01 Jul 2022 22:58:17 +0000   Fri, 01 Jul 2022 22:57:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 01 Jul 2022 22:58:17 +0000   Fri, 01 Jul 2022 22:57:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 01 Jul 2022 22:58:17 +0000   Fri, 01 Jul 2022 22:57:41 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-20220701225718-10066
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873484Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873484Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                82dabe3f-d133-4afb-a4d2-ee1450b85ce0
	  Boot ID:                    a4927dcd-d031-4927-a8c8-2ea0f9a10287
	  Kernel Version:             5.15.0-1012-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.6
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-20220701225718-10066                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m15s
	  kube-system                 kindnet-b5wkl                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-no-preload-20220701225718-10066             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-controller-manager-no-preload-20220701225718-10066    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-proxy-5ck82                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-no-preload-20220701225718-10066             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m1s   kube-proxy       
	  Normal  Starting                 4m15s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m15s  kubelet          Node no-preload-20220701225718-10066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m15s  kubelet          Node no-preload-20220701225718-10066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m15s  kubelet          Node no-preload-20220701225718-10066 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s   node-controller  Node no-preload-20220701225718-10066 event: Registered Node no-preload-20220701225718-10066 in Controller
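	
	The Ready condition above pins the four-minute NotReady on the network plugin: KubeletNotReady with "cni plugin not initialized", even while a kindnet-cni container is Running, which would be consistent with kindnet never writing a conflist onto the node. A hedged way to confirm, using standard paths rather than commands taken from this run:
	
	  # Did a CNI config ever land on the node? kindnet normally writes one under /etc/cni/net.d:
	  minikube ssh -p no-preload-20220701225718-10066 -- sudo ls -la /etc/cni/net.d
	  # Has kubelet cleared the condition since?
	  kubectl get node no-preload-20220701225718-10066 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'
	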
	
	* 
	* ==> dmesg <==
	* [  +0.007942] FS-Cache: N-cookie d=00000000de7c5649{9p.inode} n=00000000ed85478f
	[  +0.008742] FS-Cache: N-key=[8] '84a00f0200000000'
	[  +0.440350] FS-Cache: Duplicate cookie detected
	[  +0.004678] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006759] FS-Cache: O-cookie d=00000000de7c5649{9p.inode} n=000000000ba03907
	[  +0.007365] FS-Cache: O-key=[8] '8ea00f0200000000'
	[  +0.004953] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.008025] FS-Cache: N-cookie d=00000000de7c5649{9p.inode} n=00000000dd0fdb1e
	[  +0.008650] FS-Cache: N-key=[8] '8ea00f0200000000'
	[Jul 1 22:31] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jul 1 22:51] process 'docker/tmp/qemu-check843609603/check' started with executable stack
	[Jul 1 22:56] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff da 5a 07 89 70 97 08 06
	[  +9.422376] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 ec 04 d9 67 12 08 06
	[  +0.001554] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 12 e8 f5 ab 62 77 08 06
	[  +4.219906] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 34 d0 5a db d2 08 06
	[  +0.000387] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da 5a 07 89 70 97 08 06
	[Jul 1 22:57] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 f6 a0 f9 35 79 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 12 ec 04 d9 67 12 08 06
	
	* 
	* ==> etcd [b90cae4e4b7ea37050f35e70b6b5bc7d1c0d2b243219ea44859d2ac0853bb228] <==
	* {"level":"info","ts":"2022-07-01T22:57:40.720Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T22:57:40.719Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-20220701225718-10066 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-01T22:57:40.719Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-01T22:57:40.722Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2022-07-01T22:57:40.719Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-01T22:57:40.719Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-01T22:57:40.723Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-01T22:57:40.723Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2022-07-01T22:57:45.376Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.245133ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:discovery\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2022-07-01T22:57:45.376Z","caller":"traceutil/trace.go:171","msg":"trace[98245775] range","detail":"{range_begin:/registry/clusterroles/system:discovery; range_end:; response_count:0; response_revision:80; }","duration":"100.3626ms","start":"2022-07-01T22:57:45.275Z","end":"2022-07-01T22:57:45.376Z","steps":["trace[98245775] 'agreement among raft nodes before linearized reading'  (duration: 96.832114ms)"],"step_count":1}
	{"level":"info","ts":"2022-07-01T22:57:49.038Z","caller":"traceutil/trace.go:171","msg":"trace[1927820922] linearizableReadLoop","detail":"{readStateIndex:259; appliedIndex:259; }","duration":"109.87435ms","start":"2022-07-01T22:57:48.928Z","end":"2022-07-01T22:57:49.038Z","steps":["trace[1927820922] 'read index received'  (duration: 109.866721ms)","trace[1927820922] 'applied index is now lower than readState.Index'  (duration: 6.557µs)"],"step_count":2}
	{"level":"warn","ts":"2022-07-01T22:57:49.038Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.030645ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-no-preload-20220701225718-10066\" ","response":"range_response_count:1 size:6186"}
	{"level":"info","ts":"2022-07-01T22:57:49.038Z","caller":"traceutil/trace.go:171","msg":"trace[1867140299] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-no-preload-20220701225718-10066; range_end:; response_count:1; response_revision:252; }","duration":"110.085806ms","start":"2022-07-01T22:57:48.928Z","end":"2022-07-01T22:57:49.038Z","steps":["trace[1867140299] 'agreement among raft nodes before linearized reading'  (duration: 109.986775ms)"],"step_count":1}
	{"level":"info","ts":"2022-07-01T22:57:49.448Z","caller":"traceutil/trace.go:171","msg":"trace[2075058343] linearizableReadLoop","detail":"{readStateIndex:261; appliedIndex:261; }","duration":"120.342992ms","start":"2022-07-01T22:57:49.328Z","end":"2022-07-01T22:57:49.448Z","steps":["trace[2075058343] 'read index received'  (duration: 120.337394ms)","trace[2075058343] 'applied index is now lower than readState.Index'  (duration: 4.619µs)"],"step_count":2}
	{"level":"warn","ts":"2022-07-01T22:57:49.448Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"120.51147ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-no-preload-20220701225718-10066\" ","response":"range_response_count:1 size:4098"}
	{"level":"info","ts":"2022-07-01T22:57:49.448Z","caller":"traceutil/trace.go:171","msg":"trace[1223968225] range","detail":"{range_begin:/registry/pods/kube-system/etcd-no-preload-20220701225718-10066; range_end:; response_count:1; response_revision:252; }","duration":"120.565364ms","start":"2022-07-01T22:57:49.328Z","end":"2022-07-01T22:57:49.448Z","steps":["trace[1223968225] 'agreement among raft nodes before linearized reading'  (duration: 120.458386ms)"],"step_count":1}
	{"level":"warn","ts":"2022-07-01T22:57:50.278Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"155.261704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:3094"}
	{"level":"info","ts":"2022-07-01T22:57:50.278Z","caller":"traceutil/trace.go:171","msg":"trace[2055409456] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:258; }","duration":"155.385267ms","start":"2022-07-01T22:57:50.122Z","end":"2022-07-01T22:57:50.278Z","steps":["trace[2055409456] 'agreement among raft nodes before linearized reading'  (duration: 70.598719ms)","trace[2055409456] 'range keys from in-memory index tree'  (duration: 84.618935ms)"],"step_count":2}
	{"level":"info","ts":"2022-07-01T22:57:50.278Z","caller":"traceutil/trace.go:171","msg":"trace[2103903859] transaction","detail":"{read_only:false; response_revision:259; number_of_response:1; }","duration":"149.94207ms","start":"2022-07-01T22:57:50.128Z","end":"2022-07-01T22:57:50.278Z","steps":["trace[2103903859] 'process raft request'  (duration: 65.293492ms)","trace[2103903859] 'compare'  (duration: 84.545015ms)"],"step_count":2}
	{"level":"info","ts":"2022-07-01T22:57:51.109Z","caller":"traceutil/trace.go:171","msg":"trace[781945869] linearizableReadLoop","detail":"{readStateIndex:272; appliedIndex:272; }","duration":"174.267022ms","start":"2022-07-01T22:57:50.935Z","end":"2022-07-01T22:57:51.109Z","steps":["trace[781945869] 'read index received'  (duration: 174.257127ms)","trace[781945869] 'applied index is now lower than readState.Index'  (duration: 8.133µs)"],"step_count":2}
	{"level":"warn","ts":"2022-07-01T22:57:51.175Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"240.517113ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-07-01T22:57:51.175Z","caller":"traceutil/trace.go:171","msg":"trace[1713793044] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:262; }","duration":"240.608471ms","start":"2022-07-01T22:57:50.935Z","end":"2022-07-01T22:57:51.175Z","steps":["trace[1713793044] 'agreement among raft nodes before linearized reading'  (duration: 174.376947ms)","trace[1713793044] 'range keys from in-memory index tree'  (duration: 66.10988ms)"],"step_count":2}
	{"level":"warn","ts":"2022-07-01T22:57:52.428Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.210117ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-no-preload-20220701225718-10066\" ","response":"range_response_count:1 size:4359"}
	{"level":"info","ts":"2022-07-01T22:57:52.428Z","caller":"traceutil/trace.go:171","msg":"trace[322197520] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-no-preload-20220701225718-10066; range_end:; response_count:1; response_revision:265; }","duration":"103.318537ms","start":"2022-07-01T22:57:52.325Z","end":"2022-07-01T22:57:52.428Z","steps":["trace[322197520] 'range keys from in-memory index tree'  (duration: 103.086305ms)"],"step_count":1}
	{"level":"info","ts":"2022-07-01T22:58:35.992Z","caller":"traceutil/trace.go:171","msg":"trace[372511059] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"131.829529ms","start":"2022-07-01T22:58:35.860Z","end":"2022-07-01T22:58:35.992Z","steps":["trace[372511059] 'process raft request'  (duration: 34.207641ms)","trace[372511059] 'compare'  (duration: 97.515253ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  23:02:02 up 44 min,  0 users,  load average: 1.13, 2.84, 2.42
	Linux no-preload-20220701225718-10066 5.15.0-1012-gcp #17~20.04.1-Ubuntu SMP Thu Jun 23 16:10:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [9f4bd4048f717703723402dc2bd31164255acb9e131a3e8a35d47bf250201012] <==
	* I0701 22:57:44.359693       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0701 22:57:44.361586       1 cache.go:39] Caches are synced for autoregister controller
	I0701 22:57:44.417898       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0701 22:57:44.418504       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0701 22:57:44.418589       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0701 22:57:44.418642       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0701 22:57:44.418677       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0701 22:57:44.937777       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0701 22:57:45.263186       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0701 22:57:45.266534       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0701 22:57:45.266585       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0701 22:57:45.755535       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0701 22:57:45.789142       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0701 22:57:45.863518       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0701 22:57:45.869086       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I0701 22:57:45.870171       1 controller.go:611] quota admission added evaluator for: endpoints
	I0701 22:57:45.873910       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0701 22:57:46.404473       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0701 22:57:47.255908       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0701 22:57:47.263186       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0701 22:57:47.272732       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0701 22:57:47.350282       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0701 22:58:00.132849       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0701 22:58:00.481609       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0701 22:58:01.229093       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [6af50f79ce840cbfb4c4974a97d45453382d577490ba8a05d4aa2d0412fbf462] <==
	* I0701 22:57:59.430004       1 shared_informer.go:262] Caches are synced for endpoint
	I0701 22:57:59.430047       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0701 22:57:59.432355       1 shared_informer.go:262] Caches are synced for job
	I0701 22:57:59.437580       1 shared_informer.go:262] Caches are synced for PV protection
	I0701 22:57:59.523196       1 shared_informer.go:262] Caches are synced for taint
	I0701 22:57:59.523308       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0701 22:57:59.523359       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0701 22:57:59.523407       1 node_lifecycle_controller.go:1014] Missing timestamp for Node no-preload-20220701225718-10066. Assuming now as a timestamp.
	I0701 22:57:59.523479       1 event.go:294] "Event occurred" object="no-preload-20220701225718-10066" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node no-preload-20220701225718-10066 event: Registered Node no-preload-20220701225718-10066 in Controller"
	I0701 22:57:59.523500       1 node_lifecycle_controller.go:1165] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0701 22:57:59.599326       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I0701 22:57:59.611223       1 shared_informer.go:262] Caches are synced for stateful set
	I0701 22:57:59.625707       1 shared_informer.go:262] Caches are synced for resource quota
	I0701 22:57:59.631783       1 shared_informer.go:262] Caches are synced for resource quota
	I0701 22:57:59.679597       1 shared_informer.go:262] Caches are synced for daemon sets
	I0701 22:58:00.099864       1 shared_informer.go:262] Caches are synced for garbage collector
	I0701 22:58:00.128120       1 shared_informer.go:262] Caches are synced for garbage collector
	I0701 22:58:00.128144       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0701 22:58:00.134791       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0701 22:58:00.470246       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0701 22:58:00.486736       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5ck82"
	I0701 22:58:00.488364       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-b5wkl"
	I0701 22:58:00.541423       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-jzmvd"
	I0701 22:58:00.547729       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-mbfz4"
	I0701 22:58:00.567152       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-jzmvd"
	
	* 
	* ==> kube-proxy [b6c46b43c578cf7a32b68dbbe142a34080a7ce8d919b3b25cd350be67762ece8] <==
	* I0701 22:58:01.121577       1 node.go:163] Successfully retrieved node IP: 192.168.94.2
	I0701 22:58:01.121673       1 server_others.go:138] "Detected node IP" address="192.168.94.2"
	I0701 22:58:01.121706       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0701 22:58:01.224547       1 server_others.go:206] "Using iptables Proxier"
	I0701 22:58:01.224586       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0701 22:58:01.224598       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0701 22:58:01.224617       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0701 22:58:01.224645       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 22:58:01.224819       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 22:58:01.225041       1 server.go:661] "Version info" version="v1.24.2"
	I0701 22:58:01.225053       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 22:58:01.225770       1 config.go:226] "Starting endpoint slice config controller"
	I0701 22:58:01.225786       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0701 22:58:01.225872       1 config.go:317] "Starting service config controller"
	I0701 22:58:01.225877       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0701 22:58:01.226097       1 config.go:444] "Starting node config controller"
	I0701 22:58:01.226102       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0701 22:58:01.325962       1 shared_informer.go:262] Caches are synced for service config
	I0701 22:58:01.326036       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0701 22:58:01.326305       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [ac546802283136693ffa6f232baccec2dc6bf324f6201f319db62ceb6e58dae8] <==
	* E0701 22:57:44.348537       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 22:57:44.348542       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0701 22:57:44.349659       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0701 22:57:44.349704       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0701 22:57:44.349737       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0701 22:57:44.349780       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0701 22:57:45.297253       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0701 22:57:45.297294       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0701 22:57:45.344779       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 22:57:45.344819       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0701 22:57:45.359826       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0701 22:57:45.359853       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0701 22:57:45.425348       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 22:57:45.425400       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0701 22:57:45.441898       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0701 22:57:45.441930       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0701 22:57:45.447744       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0701 22:57:45.447773       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0701 22:57:45.475136       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0701 22:57:45.475182       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0701 22:57:45.483371       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0701 22:57:45.483409       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0701 22:57:45.598153       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0701 22:57:45.598194       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0701 22:57:47.044759       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-07-01 22:57:21 UTC, end at Fri 2022-07-01 23:02:02 UTC. --
	Jul 01 23:00:02 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:00:02.663369    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:00:07 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:00:07.664878    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:00:12 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:00:12.666291    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:00:17 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:00:17.667529    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:00:22 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:00:22.669115    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:00:27 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:00:27.670428    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:00:32 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:00:32.671355    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:00:37 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:00:37.672211    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:00:42 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:00:42.673879    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:00:46 no-preload-20220701225718-10066 kubelet[1741]: I0701 23:00:46.853287    1741 scope.go:110] "RemoveContainer" containerID="a277d78bbb6bec9cace8086ab22bf4a57f19419ce64afa2fac270141ae6bbe7d"
	Jul 01 23:00:47 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:00:47.674813    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:00:52 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:00:52.675891    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:00:57 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:00:57.679514    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:01:02 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:01:02.681108    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:01:07 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:01:07.682041    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:01:12 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:01:12.683430    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:01:17 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:01:17.684930    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:01:22 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:01:22.686096    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:01:27 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:01:27.686818    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:01:32 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:01:32.688397    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:01:37 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:01:37.689618    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:01:42 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:01:42.690786    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:01:47 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:01:47.692199    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:01:52 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:01:52.693154    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:01:57 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:01:57.694214    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
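The repeating kubelet error above ("cni plugin not initialized") means the container runtime's network plugin never became ready, which keeps the node NotReady and is consistent with coredns and storage-provisioner staying non-running below. A minimal sketch of how one might confirm the CNI state by hand, assuming the profile from this log still exists and that crictl is available inside the node (it normally is in minikube's containerd node images):

	# Show containerd's runtime status, which includes network plugin readiness
	out/minikube-linux-amd64 -p no-preload-20220701225718-10066 ssh "sudo crictl info"
	# Check whether any CNI configuration was ever written to the node
	out/minikube-linux-amd64 -p no-preload-20220701225718-10066 ssh "ls /etc/cni/net.d"

If crictl still reports the network plugin as not ready, the CNI bring-up (kindnet on the docker driver with containerd, per minikube's own recommendation later in this report) is the place to look rather than kubelet itself.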
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220701225718-10066 -n no-preload-20220701225718-10066
E0701 23:02:02.905283   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220701225718-10066 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-6d4b75cb6d-mbfz4 storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220701225718-10066 describe pod coredns-6d4b75cb6d-mbfz4 storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220701225718-10066 describe pod coredns-6d4b75cb6d-mbfz4 storage-provisioner: exit status 1 (52.247603ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-6d4b75cb6d-mbfz4" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220701225718-10066 describe pod coredns-6d4b75cb6d-mbfz4 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (284.21s)
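One note on the post-mortem above: the two non-running pods live in the kube-system namespace, but the describe command was issued without a namespace, so the API server searched default and answered NotFound, which is the most likely reason for the errors in the stderr block. A sketch of the same query with the namespace supplied, reusing the context and pod names from this log:

	# Describe the non-running pods in the namespace they actually live in
	kubectl --context no-preload-20220701225718-10066 -n kube-system describe pod coredns-6d4b75cb6d-mbfz4 storage-provisioner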

TestStartStop/group/default-k8s-different-port/serial/FirstStart (278.59s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220701230032-10066 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2
E0701 23:00:34.503559   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
E0701 23:00:43.467385   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
E0701 23:00:43.472667   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
E0701 23:00:43.482893   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
E0701 23:00:43.502959   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
E0701 23:00:43.543304   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
E0701 23:00:43.623615   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
E0701 23:00:43.783989   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
E0701 23:00:44.104739   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
E0701 23:00:44.745505   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
E0701 23:00:46.026598   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
E0701 23:00:48.587794   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
E0701 23:00:51.855715   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
E0701 23:00:51.860971   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
E0701 23:00:51.871216   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
E0701 23:00:51.891478   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
E0701 23:00:51.932003   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
E0701 23:00:52.012329   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
E0701 23:00:52.173491   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
E0701 23:00:52.494105   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
E0701 23:00:53.135015   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
E0701 23:00:53.708685   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
E0701 23:00:54.415486   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
E0701 23:00:56.976070   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
E0701 23:01:02.096965   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
E0701 23:01:03.949883   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
E0701 23:01:12.337876   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
E0701 23:01:24.430671   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
E0701 23:01:32.818846   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
E0701 23:01:42.423511   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
E0701 23:01:42.428761   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
E0701 23:01:42.439004   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
E0701 23:01:42.459237   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
E0701 23:01:42.499501   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
E0701 23:01:42.579788   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
E0701 23:01:42.740175   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
E0701 23:01:43.060765   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
E0701 23:01:43.701600   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
E0701 23:01:44.982815   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
E0701 23:01:47.543478   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
E0701 23:01:47.697785   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:01:47.703026   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:01:47.713260   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:01:47.733513   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:01:47.773759   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:01:47.854051   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:01:48.014500   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:01:48.335035   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:01:48.975753   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:01:50.256165   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:01:52.664213   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
E0701 23:01:52.816620   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:01:57.936769   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:02:00.873532   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory
E0701 23:02:00.878846   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory
E0701 23:02:00.889074   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory
E0701 23:02:00.909320   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory
E0701 23:02:00.949555   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory
E0701 23:02:01.029843   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory
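The cert_rotation.go:168 lines above are background noise rather than part of this test's failure: client-go's certificate-rotation watcher is still tracking client certificates for profiles that earlier tests already deleted (kindnet-, auto-, calico-, bridge-, enable-default-cni-*), so each refresh attempt fails with "no such file or directory". A sketch of how one could inspect and prune the stale entries from the shared kubeconfig; this quiets future invocations that load the same kubeconfig, though a process that is already running will keep logging until it exits:

	# List the contexts still recorded in the kubeconfig used by this run
	kubectl config get-contexts
	# Remove a context whose minikube profile no longer exists (name taken from the log)
	kubectl config delete-context calico-20220701225121-10066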

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-different-port-20220701230032-10066 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2: exit status 80 (4m36.50787258s)

-- stdout --
	* [default-k8s-different-port-20220701230032-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14483
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node default-k8s-different-port-20220701230032-10066 in cluster default-k8s-different-port-20220701230032-10066
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
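The stdout above runs through addon enablement and the command still exits with status 80, so the failure happened after cluster creation, presumably during the component verification that --wait=true requests; the stderr trace that follows carries the detail. When reproducing such a failure, a useful first step is to persist the profile's logs before cleanup deletes the cluster; a sketch using this suite's own binary and profile name (the output file name here is arbitrary):

	# Capture the cluster logs for the failing profile to a file
	out/minikube-linux-amd64 -p default-k8s-different-port-20220701230032-10066 logs --file=logs.txt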
** stderr ** 
	I0701 23:00:32.356530  239469 out.go:296] Setting OutFile to fd 1 ...
	I0701 23:00:32.356741  239469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:00:32.356752  239469 out.go:309] Setting ErrFile to fd 2...
	I0701 23:00:32.356757  239469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:00:32.357259  239469 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 23:00:32.357532  239469 out.go:303] Setting JSON to false
	I0701 23:00:32.359845  239469 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2585,"bootTime":1656713847,"procs":1325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 23:00:32.359924  239469 start.go:125] virtualization: kvm guest
	I0701 23:00:32.362432  239469 out.go:177] * [default-k8s-different-port-20220701230032-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0701 23:00:32.364367  239469 out.go:177]   - MINIKUBE_LOCATION=14483
	I0701 23:00:32.364374  239469 notify.go:193] Checking for updates...
	I0701 23:00:32.365884  239469 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 23:00:32.367371  239469 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:00:32.368928  239469 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 23:00:32.370375  239469 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 23:00:32.372140  239469 config.go:178] Loaded profile config "embed-certs-20220701225830-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:00:32.372246  239469 config.go:178] Loaded profile config "no-preload-20220701225718-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:00:32.372323  239469 config.go:178] Loaded profile config "old-k8s-version-20220701225700-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0701 23:00:32.372361  239469 driver.go:360] Setting default libvirt URI to qemu:///system
	I0701 23:00:32.414752  239469 docker.go:137] docker version: linux-20.10.17
	I0701 23:00:32.414847  239469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:00:32.524346  239469 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:57 SystemTime:2022-07-01 23:00:32.44675508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:00:32.524469  239469 docker.go:254] overlay module found
	I0701 23:00:32.526463  239469 out.go:177] * Using the docker driver based on user configuration
	I0701 23:00:32.527806  239469 start.go:284] selected driver: docker
	I0701 23:00:32.527824  239469 start.go:808] validating driver "docker" against <nil>
	I0701 23:00:32.527842  239469 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 23:00:32.529035  239469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:00:32.639135  239469 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:57 SystemTime:2022-07-01 23:00:32.562082913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:00:32.639255  239469 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0701 23:00:32.639406  239469 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 23:00:32.641295  239469 out.go:177] * Using Docker driver with root privileges
	I0701 23:00:32.642763  239469 cni.go:95] Creating CNI manager for ""
	I0701 23:00:32.642792  239469 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:00:32.642813  239469 start_flags.go:305] Found "CNI" CNI - setting NetworkPlugin=cni
	I0701 23:00:32.642827  239469 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:00:32.644523  239469 out.go:177] * Starting control plane node default-k8s-different-port-20220701230032-10066 in cluster default-k8s-different-port-20220701230032-10066
	I0701 23:00:32.645977  239469 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0701 23:00:32.647607  239469 out.go:177] * Pulling base image ...
	I0701 23:00:32.649208  239469 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:00:32.649244  239469 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0701 23:00:32.649258  239469 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4
	I0701 23:00:32.649266  239469 cache.go:57] Caching tarball of preloaded images
	I0701 23:00:32.649518  239469 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 23:00:32.649550  239469 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on containerd
	I0701 23:00:32.649687  239469 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/config.json ...
	I0701 23:00:32.649713  239469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/config.json: {Name:mke93ae23ec1465a166017f8899d6d9873d4cc00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:00:32.683956  239469 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0701 23:00:32.683989  239469 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0701 23:00:32.684005  239469 cache.go:208] Successfully downloaded all kic artifacts
	I0701 23:00:32.684043  239469 start.go:352] acquiring machines lock for default-k8s-different-port-20220701230032-10066: {Name:mk7518221e8259d073969ba977a5dbef99fe5935 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:00:32.684161  239469 start.go:356] acquired machines lock for "default-k8s-different-port-20220701230032-10066" in 100.96µs
	I0701 23:00:32.684184  239469 start.go:91] Provisioning new machine with config: &{Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 23:00:32.684269  239469 start.go:131] createHost starting for "" (driver="docker")
	I0701 23:00:32.686619  239469 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0701 23:00:32.686845  239469 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220701230032-10066" (driver="docker")
	I0701 23:00:32.686876  239469 client.go:168] LocalClient.Create starting
	I0701 23:00:32.686948  239469 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem
	I0701 23:00:32.686986  239469 main.go:134] libmachine: Decoding PEM data...
	I0701 23:00:32.687009  239469 main.go:134] libmachine: Parsing certificate...
	I0701 23:00:32.687080  239469 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem
	I0701 23:00:32.687102  239469 main.go:134] libmachine: Decoding PEM data...
	I0701 23:00:32.687116  239469 main.go:134] libmachine: Parsing certificate...
	I0701 23:00:32.687434  239469 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220701230032-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0701 23:00:32.719427  239469 cli_runner.go:211] docker network inspect default-k8s-different-port-20220701230032-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0701 23:00:32.719495  239469 network_create.go:272] running [docker network inspect default-k8s-different-port-20220701230032-10066] to gather additional debugging logs...
	I0701 23:00:32.719519  239469 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220701230032-10066
	W0701 23:00:32.752806  239469 cli_runner.go:211] docker network inspect default-k8s-different-port-20220701230032-10066 returned with exit code 1
	I0701 23:00:32.752848  239469 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220701230032-10066]: docker network inspect default-k8s-different-port-20220701230032-10066: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220701230032-10066
	I0701 23:00:32.752877  239469 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220701230032-10066]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220701230032-10066
	
	** /stderr **
	I0701 23:00:32.752943  239469 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 23:00:32.786346  239469 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b090b5bc601e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:8d:0d:43:b9}}
	I0701 23:00:32.787222  239469 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-585a063a32f9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:48:75:30:46}}
	I0701 23:00:32.787935  239469 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-ad316ca52a99 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:19:e0:88:c5}}
	I0701 23:00:32.789206  239469 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc0009fa010] misses:0}
	I0701 23:00:32.789264  239469 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0701 23:00:32.789282  239469 network_create.go:115] attempt to create docker network default-k8s-different-port-20220701230032-10066 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0701 23:00:32.789350  239469 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220701230032-10066 default-k8s-different-port-20220701230032-10066
	I0701 23:00:32.862393  239469 network_create.go:99] docker network default-k8s-different-port-20220701230032-10066 192.168.76.0/24 created
	I0701 23:00:32.862432  239469 kic.go:106] calculated static IP "192.168.76.2" for the "default-k8s-different-port-20220701230032-10066" container
	I0701 23:00:32.862498  239469 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0701 23:00:32.897872  239469 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220701230032-10066 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220701230032-10066 --label created_by.minikube.sigs.k8s.io=true
	I0701 23:00:32.932471  239469 oci.go:103] Successfully created a docker volume default-k8s-different-port-20220701230032-10066
	I0701 23:00:32.932560  239469 cli_runner.go:164] Run: docker run --rm --name default-k8s-different-port-20220701230032-10066-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220701230032-10066 --entrypoint /usr/bin/test -v default-k8s-different-port-20220701230032-10066:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib
	I0701 23:00:33.531943  239469 oci.go:107] Successfully prepared a docker volume default-k8s-different-port-20220701230032-10066
	I0701 23:00:33.531997  239469 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:00:33.532020  239469 kic.go:179] Starting extracting preloaded images to volume ...
	I0701 23:00:33.532077  239469 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220701230032-10066:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir
	I0701 23:00:40.268594  239469 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220701230032-10066:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir: (6.73646927s)
	I0701 23:00:40.268627  239469 kic.go:188] duration metric: took 6.736603 seconds to extract preloaded images to volume
	W0701 23:00:40.268766  239469 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0701 23:00:40.268873  239469 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0701 23:00:40.376391  239469 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20220701230032-10066 --name default-k8s-different-port-20220701230032-10066 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220701230032-10066 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20220701230032-10066 --network default-k8s-different-port-20220701230032-10066 --ip 192.168.76.2 --volume default-k8s-different-port-20220701230032-10066:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e
	I0701 23:00:40.794054  239469 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Running}}
	I0701 23:00:40.832495  239469 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:00:40.867253  239469 cli_runner.go:164] Run: docker exec default-k8s-different-port-20220701230032-10066 stat /var/lib/dpkg/alternatives/iptables
	I0701 23:00:40.961011  239469 oci.go:144] the created container "default-k8s-different-port-20220701230032-10066" has a running status.
	I0701 23:00:40.961049  239469 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa...
	I0701 23:00:41.103356  239469 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0701 23:00:41.198323  239469 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:00:41.239991  239469 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0701 23:00:41.240014  239469 kic_runner.go:114] Args: [docker exec --privileged default-k8s-different-port-20220701230032-10066 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0701 23:00:41.330485  239469 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:00:41.365577  239469 machine.go:88] provisioning docker machine ...
	I0701 23:00:41.365634  239469 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220701230032-10066"
	I0701 23:00:41.365691  239469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:00:41.401454  239469 main.go:134] libmachine: Using SSH client type: native
	I0701 23:00:41.401653  239469 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I0701 23:00:41.401676  239469 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220701230032-10066 && echo "default-k8s-different-port-20220701230032-10066" | sudo tee /etc/hostname
	I0701 23:00:41.535055  239469 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220701230032-10066
	
	I0701 23:00:41.535123  239469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:00:41.569248  239469 main.go:134] libmachine: Using SSH client type: native
	I0701 23:00:41.569386  239469 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I0701 23:00:41.569409  239469 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220701230032-10066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220701230032-10066/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220701230032-10066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 23:00:41.686222  239469 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0701 23:00:41.686257  239469 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
	I0701 23:00:41.686279  239469 ubuntu.go:177] setting up certificates
	I0701 23:00:41.686286  239469 provision.go:83] configureAuth start
	I0701 23:00:41.686335  239469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:00:41.719108  239469 provision.go:138] copyHostCerts
	I0701 23:00:41.719173  239469 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
	I0701 23:00:41.719187  239469 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
	I0701 23:00:41.719254  239469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
	I0701 23:00:41.719326  239469 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
	I0701 23:00:41.719336  239469 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
	I0701 23:00:41.719363  239469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1679 bytes)
	I0701 23:00:41.719412  239469 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
	I0701 23:00:41.719420  239469 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
	I0701 23:00:41.719445  239469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
	I0701 23:00:41.719487  239469 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220701230032-10066 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220701230032-10066]
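	provision.go:112 above generates the machine's server certificate with the SAN list shown: the node IP, loopback (listed twice in the log), localhost, minikube, and the machine name. A compact sketch of issuing a certificate with those SANs using Go's crypto/x509; it self-signs for brevity, whereas minikube signs with the CA from certs/ca.pem, so treat it as an illustration of the SAN mechanics only:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs mirroring the provision.go line above.
	ips := []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")}
	dns := []string{"localhost", "minikube", "default-k8s-different-port-20220701230032-10066"}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-different-port-20220701230032-10066"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		IPAddresses:  ips,
		DNSNames:     dns,
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here for brevity; minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}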
	I0701 23:00:41.830942  239469 provision.go:172] copyRemoteCerts
	I0701 23:00:41.830995  239469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 23:00:41.831027  239469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:00:41.864512  239469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:00:41.949951  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 23:00:41.967624  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0701 23:00:41.985011  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 23:00:42.002230  239469 provision.go:86] duration metric: configureAuth took 315.931184ms
	I0701 23:00:42.002253  239469 ubuntu.go:193] setting minikube options for container-runtime
	I0701 23:00:42.002393  239469 config.go:178] Loaded profile config "default-k8s-different-port-20220701230032-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:00:42.002406  239469 machine.go:91] provisioned docker machine in 636.794282ms
	I0701 23:00:42.002411  239469 client.go:171] LocalClient.Create took 9.31553155s
	I0701 23:00:42.002430  239469 start.go:173] duration metric: libmachine.API.Create for "default-k8s-different-port-20220701230032-10066" took 9.315581603s
	I0701 23:00:42.002441  239469 start.go:306] post-start starting for "default-k8s-different-port-20220701230032-10066" (driver="docker")
	I0701 23:00:42.002446  239469 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 23:00:42.002486  239469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 23:00:42.002522  239469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:00:42.035732  239469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:00:42.121955  239469 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 23:00:42.124965  239469 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 23:00:42.124995  239469 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 23:00:42.125011  239469 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 23:00:42.125023  239469 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0701 23:00:42.125034  239469 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
	I0701 23:00:42.125087  239469 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
	I0701 23:00:42.125172  239469 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem -> 100662.pem in /etc/ssl/certs
	I0701 23:00:42.125282  239469 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 23:00:42.132257  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:00:42.151720  239469 start.go:309] post-start completed in 149.270154ms
	I0701 23:00:42.152035  239469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:00:42.190634  239469 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/config.json ...
	I0701 23:00:42.190862  239469 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 23:00:42.190902  239469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:00:42.223968  239469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:00:42.310575  239469 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 23:00:42.314378  239469 start.go:134] duration metric: createHost completed in 9.630096746s
	I0701 23:00:42.314401  239469 start.go:81] releasing machines lock for "default-k8s-different-port-20220701230032-10066", held for 9.63022846s
	I0701 23:00:42.314486  239469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:00:42.348038  239469 ssh_runner.go:195] Run: systemctl --version
	I0701 23:00:42.348081  239469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:00:42.348103  239469 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0701 23:00:42.348167  239469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:00:42.386661  239469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:00:42.387446  239469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:00:42.471143  239469 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 23:00:42.492277  239469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 23:00:42.501455  239469 docker.go:179] disabling docker service ...
	I0701 23:00:42.501510  239469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 23:00:42.517379  239469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 23:00:42.526463  239469 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 23:00:42.608061  239469 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 23:00:42.690439  239469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0701 23:00:42.699282  239469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 23:00:42.711133  239469 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0701 23:00:42.718341  239469 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0701 23:00:42.725755  239469 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0701 23:00:42.733903  239469 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0701 23:00:42.741339  239469 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0701 23:00:42.748827  239469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
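	The printf above writes a base64 payload into 02-containerd.conf; "dmVyc2lvbiA9IDIK" decodes to "version = 2" plus a newline, so the drop-in simply pins the containerd config schema to version 2. A one-line Go check of that decoding:

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// The payload written to 02-containerd.conf in the log above.
	b, err := base64.StdEncoding.DecodeString("dmVyc2lvbiA9IDIK")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", string(b)) // "version = 2\n"
}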
	I0701 23:00:42.760667  239469 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 23:00:42.767095  239469 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 23:00:42.773015  239469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 23:00:42.851570  239469 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 23:00:42.924491  239469 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0701 23:00:42.924562  239469 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0701 23:00:42.928054  239469 start.go:471] Will wait 60s for crictl version
	I0701 23:00:42.928115  239469 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:00:42.954643  239469 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0701 23:00:42.954717  239469 ssh_runner.go:195] Run: containerd --version
	I0701 23:00:42.984104  239469 ssh_runner.go:195] Run: containerd --version
	I0701 23:00:43.016309  239469 out.go:177] * Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	I0701 23:00:43.017720  239469 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220701230032-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 23:00:43.050698  239469 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0701 23:00:43.053837  239469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:00:43.063525  239469 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:00:43.063575  239469 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:00:43.086821  239469 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:00:43.086842  239469 containerd.go:461] Images already preloaded, skipping extraction
	I0701 23:00:43.086882  239469 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:00:43.110586  239469 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:00:43.110607  239469 cache_images.go:84] Images are preloaded, skipping loading
	I0701 23:00:43.110657  239469 ssh_runner.go:195] Run: sudo crictl info
	I0701 23:00:43.133264  239469 cni.go:95] Creating CNI manager for ""
	I0701 23:00:43.133286  239469 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:00:43.133296  239469 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
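	cni.go:162 above records the selection rule at work: with the docker driver and a container runtime other than Docker, minikube recommends kindnet. A toy sketch of that decision (chooseCNI is a hypothetical stand-in; minikube's real logic lives in its cni package and weighs more inputs):

package main

import "fmt"

// chooseCNI is a hypothetical reduction of the rule logged above:
// docker driver + non-docker runtime => kindnet.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "bridge" // placeholder default; not minikube's full fallback chain
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd")) // kindnet
}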
	I0701 23:00:43.133307  239469 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220701230032-10066 NodeName:default-k8s-different-port-20220701230032-10066 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0701 23:00:43.133435  239469 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220701230032-10066"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
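	The kubeadm config above is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch of walking those documents with gopkg.in/yaml.v3's stream decoder; the embedded config is abbreviated to the two fields the sketch needs:

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

// kubeadmYAML abbreviates the multi-document config logged above.
const kubeadmYAML = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`

func main() {
	dec := yaml.NewDecoder(strings.NewReader(kubeadmYAML))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Println(doc.APIVersion, doc.Kind)
	}
}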
	
	I0701 23:00:43.133515  239469 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220701230032-10066 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0701 23:00:43.133556  239469 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0701 23:00:43.141170  239469 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 23:00:43.141232  239469 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 23:00:43.148417  239469 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0701 23:00:43.161136  239469 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 23:00:43.173351  239469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0701 23:00:43.185576  239469 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0701 23:00:43.188303  239469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:00:43.197060  239469 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066 for IP: 192.168.76.2
	I0701 23:00:43.197145  239469 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
	I0701 23:00:43.197177  239469 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
	I0701 23:00:43.197225  239469 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/client.key
	I0701 23:00:43.197241  239469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/client.crt with IP's: []
	I0701 23:00:43.348543  239469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/client.crt ...
	I0701 23:00:43.348577  239469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/client.crt: {Name:mkc44243c9191651565000054c142b08a17f2e54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:00:43.348812  239469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/client.key ...
	I0701 23:00:43.348830  239469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/client.key: {Name:mk281a4b798c59f66934c331b74ffc5b9c596ba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:00:43.348970  239469 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key.31bdca25
	I0701 23:00:43.348991  239469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0701 23:00:43.491315  239469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.crt.31bdca25 ...
	I0701 23:00:43.491344  239469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.crt.31bdca25: {Name:mk4fb2ce24220cb60283d06e92e48e13cb204171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:00:43.491518  239469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key.31bdca25 ...
	I0701 23:00:43.491537  239469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key.31bdca25: {Name:mkaad5f86ff30d4b24ba91ec365805c937e68259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:00:43.491673  239469 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.crt
	I0701 23:00:43.491739  239469 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key
	I0701 23:00:43.491784  239469 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.key
	I0701 23:00:43.491819  239469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.crt with IP's: []
	I0701 23:00:43.800330  239469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.crt ...
	I0701 23:00:43.800358  239469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.crt: {Name:mk35e707e41436cad5d092ac8ad811e177fe2cb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:00:43.800558  239469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.key ...
	I0701 23:00:43.800575  239469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.key: {Name:mk2c11cc887d5c49c0facae0cc376a37aace8a39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:00:43.800806  239469 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem (1338 bytes)
	W0701 23:00:43.800854  239469 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066_empty.pem, impossibly tiny 0 bytes
	I0701 23:00:43.800874  239469 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 23:00:43.800909  239469 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
	I0701 23:00:43.800942  239469 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
	I0701 23:00:43.800976  239469 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1679 bytes)
	I0701 23:00:43.801033  239469 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:00:43.801572  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0701 23:00:43.819814  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 23:00:43.837329  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 23:00:43.854741  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0701 23:00:43.871447  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 23:00:43.887796  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0701 23:00:43.903812  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 23:00:43.920103  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 23:00:43.936429  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 23:00:43.953084  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem --> /usr/share/ca-certificates/10066.pem (1338 bytes)
	I0701 23:00:43.969547  239469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /usr/share/ca-certificates/100662.pem (1708 bytes)
	I0701 23:00:43.986383  239469 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 23:00:43.998529  239469 ssh_runner.go:195] Run: openssl version
	I0701 23:00:44.003011  239469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10066.pem && ln -fs /usr/share/ca-certificates/10066.pem /etc/ssl/certs/10066.pem"
	I0701 23:00:44.009754  239469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10066.pem
	I0701 23:00:44.012613  239469 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul  1 22:28 /usr/share/ca-certificates/10066.pem
	I0701 23:00:44.012658  239469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10066.pem
	I0701 23:00:44.017062  239469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10066.pem /etc/ssl/certs/51391683.0"
	I0701 23:00:44.023919  239469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100662.pem && ln -fs /usr/share/ca-certificates/100662.pem /etc/ssl/certs/100662.pem"
	I0701 23:00:44.030894  239469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100662.pem
	I0701 23:00:44.033664  239469 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul  1 22:28 /usr/share/ca-certificates/100662.pem
	I0701 23:00:44.033708  239469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100662.pem
	I0701 23:00:44.038436  239469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100662.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 23:00:44.046209  239469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 23:00:44.053628  239469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:00:44.056627  239469 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul  1 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:00:44.056668  239469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:00:44.061040  239469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 23:00:44.068266  239469 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:00:44.068354  239469 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0701 23:00:44.068399  239469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:00:44.092472  239469 cri.go:87] found id: ""
	I0701 23:00:44.092532  239469 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 23:00:44.099473  239469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:00:44.106313  239469 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0701 23:00:44.106364  239469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:00:44.112690  239469 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 23:00:44.112726  239469 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0701 23:00:55.027861  239469 out.go:204]   - Generating certificates and keys ...
	I0701 23:00:55.031084  239469 out.go:204]   - Booting up control plane ...
	I0701 23:00:55.033895  239469 out.go:204]   - Configuring RBAC rules ...
	I0701 23:00:55.036373  239469 cni.go:95] Creating CNI manager for ""
	I0701 23:00:55.036397  239469 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:00:55.038256  239469 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:00:55.039627  239469 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:00:55.043753  239469 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:00:55.043774  239469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:00:55.058875  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0701 23:00:55.944808  239469 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 23:00:55.944900  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04 minikube.k8s.io/name=default-k8s-different-port-20220701230032-10066 minikube.k8s.io/updated_at=2022_07_01T23_00_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:00:55.944905  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:00:56.049417  239469 ops.go:34] apiserver oom_adj: -16
	I0701 23:00:56.049498  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:00:56.627360  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:00:57.127671  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:00:57.627028  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:00:58.127761  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:00:58.627765  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:00:59.127760  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:00:59.627280  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:00.127784  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:00.627744  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:01.127743  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:01.627619  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:02.127782  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:02.627395  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:03.127748  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:03.627304  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:04.127777  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:04.627576  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:05.127612  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:05.627425  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:06.127499  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:06.627771  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:07.127752  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:07.626972  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:08.126852  239469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:01:08.186347  239469 kubeadm.go:1045] duration metric: took 12.241501127s to wait for elevateKubeSystemPrivileges.
	I0701 23:01:08.186384  239469 kubeadm.go:397] StartCluster complete in 24.118125112s
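	The burst of "kubectl get sa default" runs above (roughly every 500ms from 23:00:56 to 23:01:08) is a poll loop waiting for the default service account to exist before the cluster-admin binding can take effect. The same shape with k8s.io/apimachinery's wait helper, sketched under the assumption that checkDefaultSA wraps one kubectl invocation (hypothetical helper; minikube's own loop may be structured differently):

package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// checkDefaultSA is a hypothetical wrapper around the kubectl call in the
// log: it reports whether the "default" service account exists yet.
func checkDefaultSA() (bool, error) {
	err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.24.2/kubectl",
		"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
	return err == nil, nil // not-found is a retryable condition, not an error
}

func main() {
	if err := wait.PollImmediate(500*time.Millisecond, 2*time.Minute, checkDefaultSA); err != nil {
		panic(err)
	}
	fmt.Println("default service account is present")
}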
	I0701 23:01:08.186406  239469 settings.go:142] acquiring lock: {Name:mk319951f11766fbe002e53432d5b04e4322851e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:01:08.186506  239469 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:01:08.187848  239469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:01:08.702475  239469 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220701230032-10066" rescaled to 1
	I0701 23:01:08.702522  239469 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 23:01:08.704547  239469 out.go:177] * Verifying Kubernetes components...
	I0701 23:01:08.702601  239469 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 23:01:08.702617  239469 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0701 23:01:08.702761  239469 config.go:178] Loaded profile config "default-k8s-different-port-20220701230032-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:01:08.705966  239469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:01:08.706005  239469 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:01:08.706035  239469 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220701230032-10066"
	W0701 23:01:08.706048  239469 addons.go:162] addon storage-provisioner should already be in state true
	I0701 23:01:08.706044  239469 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:01:08.706068  239469 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220701230032-10066"
	I0701 23:01:08.706093  239469 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:01:08.706457  239469 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:01:08.706603  239469 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:01:08.753029  239469 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 23:01:08.754122  239469 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220701230032-10066"
	W0701 23:01:08.754509  239469 addons.go:162] addon default-storageclass should already be in state true
	I0701 23:01:08.754587  239469 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:01:08.754490  239469 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:01:08.754619  239469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 23:01:08.754669  239469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:01:08.755065  239469 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:01:08.781972  239469 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0701 23:01:08.783498  239469 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220701230032-10066" to be "Ready" ...
	I0701 23:01:08.798287  239469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:01:08.801197  239469 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 23:01:08.801219  239469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 23:01:08.801266  239469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:01:08.840063  239469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:01:08.933105  239469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:01:08.974642  239469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 23:01:09.049107  239469 start.go:809] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0701 23:01:09.334904  239469 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0701 23:01:09.336309  239469 addons.go:414] enableAddons completed in 633.692494ms
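	From here, the run of node_ready.go:58 lines below repeats while minikube waits (up to 6m0s, per start.go:211 above) for the node's Ready condition to flip to True; it never does in this test. A client-go sketch of the underlying readiness check, using the cluster name and in-VM kubeconfig path from this run, with the surrounding retry loop omitted:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(cs, "default-k8s-different-port-20220701230032-10066")
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", ready)
}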
	I0701 23:01:10.790274  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:01:12.790423  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	[... 101 similar node_ready.go:58 poll entries elided: the same "Ready":"False" status was logged roughly every 2.5s from 23:01:14 through 23:05:05 ...]
	I0701 23:05:07.790365  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:05:08.792772  239469 node_ready.go:38] duration metric: took 4m0.009239027s waiting for node "default-k8s-different-port-20220701230032-10066" to be "Ready" ...
	I0701 23:05:08.795025  239469 out.go:177] 
	W0701 23:05:08.796589  239469 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0701 23:05:08.796607  239469 out.go:239] * 
	W0701 23:05:08.797333  239469 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 23:05:08.798802  239469 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p default-k8s-different-port-20220701230032-10066 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2": exit status 80
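
The four-minute run of node_ready.go entries above is minikube polling the node's Ready condition until its wait budget expires ("wait 6m0s for node"). For readers who want to reproduce that check outside the test harness, the following Go sketch approximates it with client-go. It is not minikube's implementation; the kubeconfig path, poll cadence, and timeout below are assumptions inferred from this log.

    // Approximation of the readiness poll seen in the log, using client-go.
    // NOT minikube code: kubeconfig path, interval, and timeout are assumed.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig location; the real run uses the Jenkins workspace path.
    	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(config)
    	nodeName := "default-k8s-different-port-20220701230032-10066"

    	// Poll every 2.5s (the cadence visible in the log) for up to 6m,
    	// matching the "wait 6m0s for node" budget in the error message.
    	err = wait.PollImmediate(2500*time.Millisecond, 6*time.Minute, func() (bool, error) {
    		node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat API errors as transient and keep polling
    		}
    		for _, cond := range node.Status.Conditions {
    			if cond.Type == v1.NodeReady {
    				fmt.Printf("node %q has status \"Ready\":%q\n", nodeName, cond.Status)
    				return cond.Status == v1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    	if err != nil {
    		fmt.Println("timed out waiting for the condition:", err)
    	}
    }

A node that never leaves "Ready":"False" for the full budget produces exactly the GUEST_START timeout reported above; the usual next step is checking the CNI and kubelet logs collected in the post-mortem below.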
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220701230032-10066
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220701230032-10066:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93",
	        "Created": "2022-07-01T23:00:40.408283404Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 240217,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-01T23:00:40.782604309Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93/hostname",
	        "HostsPath": "/var/lib/docker/containers/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93/hosts",
	        "LogPath": "/var/lib/docker/containers/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93-json.log",
	        "Name": "/default-k8s-different-port-20220701230032-10066",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220701230032-10066:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220701230032-10066",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dee8e48f45f2d547450f4daebe89b962ba00d4c3c0e85728311b87d0be50d5f2-init/diff:/var/lib/docker/overlay2/edb6d17bdc5cabdb1fac75f829fa6b696b451285a56daf994d10472ac162d1fa/diff:/var/lib/docker/overlay2/12553db54f9401cb3cd4ea304c82950d708bec4ffbeafa05f361df03389b23f5/diff:/var/lib/docker/overlay2/7e3fcd2813dd5acb24030b1419b7a8a53ea2329287a62c3cd166eb5e47617a4e/diff:/var/lib/docker/overlay2/15f31da888d327ea20972c47badf360a35e5d2b17a3fae8f69cb0d0d1db2bfa0/diff:/var/lib/docker/overlay2/4d7fcdaa5483b16cc999e79056aae74f5814a7aca964696154ac6ec31eebe508/diff:/var/lib/docker/overlay2/9f3a7dbc3c70fba7c77f5d8e54e394ac0c37123defe036d76af7d048d08b974e/diff:/var/lib/docker/overlay2/4f0b0b0343fd3ac1a6d5f774e91228b75f1e33f810714d09d38ecd1e38cfe250/diff:/var/lib/docker/overlay2/bfd684b2b6e2cfe1f30901dbf02332074c65e296b386b083ea38c10128b42115/diff:/var/lib/docker/overlay2/d05e53b39ab1962ee7dd1a17c7c6df7e7db940db5a19b0d9312ccafaf3c67afc/diff:/var/lib/docker/overlay2/7a24a3
8f96e1aff9985bf714d962a37888eced9c4b8a3018e499080634306871/diff:/var/lib/docker/overlay2/b54ee85e20ecd71a593d453d879a26c40869049e30ba5b18b4c021c4cf667ec6/diff:/var/lib/docker/overlay2/cc9f249b45cce30a5a0432a644435d68e765dc097e3bfdbde41b7d725f258866/diff:/var/lib/docker/overlay2/dac381b24f77e5de76295d05ca2d4ae6a611d31efdf28050dc8acbc16f893037/diff:/var/lib/docker/overlay2/fe9cd2517ce90b2330101b987c76605b9d73f40b8c967e33975ad22bd3e427df/diff:/var/lib/docker/overlay2/34da7eb6e789c53ba8f86e0d904243d3314979a523044238ab8b7e9e95f685d4/diff:/var/lib/docker/overlay2/44c3ffe75f70e4d9d5d17064609a4a167961cdca81aab03b2a72dfbe05d09f41/diff:/var/lib/docker/overlay2/76feeb878d0278a48230287307ce3a01000291c1284d2222d114e37836ebc153/diff:/var/lib/docker/overlay2/3ab5eb284edb3402610bb5d4fbd0d8c3fc6b53fd39d2a66cc3a9bb1e313fe5ee/diff:/var/lib/docker/overlay2/7604bb52dbaf4e2c359f0b2b11aa345bbf33498b0346044f4c6ff3178a3e8df5/diff:/var/lib/docker/overlay2/0a19abc50663f9812511b849524a7625a16f183fa64aff5a0a9c1c7664cc2027/diff:/var/lib/d
ocker/overlay2/00d25c59a62da71857955ae29c7c803b77cb4c97b23913f166c62440a3b63456/diff:/var/lib/docker/overlay2/cd6c06e5c4d7bdfb5a78ffcf6c0cf331a6101b713bd238ad5c0ab119ad599bf4/diff:/var/lib/docker/overlay2/73ae644d477b17687dc53b7176c9e05aa10e0e0cc22b0d2adbd96493280d2696/diff:/var/lib/docker/overlay2/1490b9c59b00b3ee6078671a8023960891ee2bfc21bfe92c12a71c87ea04fff1/diff:/var/lib/docker/overlay2/42f40e76b67a2f74340f543b0beab44f98b10e0a0903d4006a173272c52a31d0/diff:/var/lib/docker/overlay2/648763e18e8325f20ff26dd88a1788a2d9e575a9ca24f44b2430918381add179/diff:/var/lib/docker/overlay2/955b13af9b6a054abdcd0dfdf71ce57d6da1b5718bee40c52fc7739b4ff16496/diff:/var/lib/docker/overlay2/1e55f6bfa306e6667808548601e508c0335b7021c9c6791ee5ed3df2514aff12/diff:/var/lib/docker/overlay2/ed1fd4b4489438557a54639b2ed769eb0371193cd0129b222edea8f79274f7e8/diff:/var/lib/docker/overlay2/c4058f514a3e669de2d6139c7bc94d4187679acabf288e44e25b5d50db5b14ac/diff:/var/lib/docker/overlay2/f55c5f04c4a64f585a9b90e7f23e5d5142014f06b3bf877739879029b76
b574f/diff:/var/lib/docker/overlay2/74b9141dda92027ee02e8b3abc4bb05ed13ce01d58f4c3d644e7235e411a863e/diff:/var/lib/docker/overlay2/53dd59b4b1dfa382f4b296dffd27b84e247851098ea1c719fa3b8ed3e158e73d/diff:/var/lib/docker/overlay2/01e3082bf93042f963fa4d0b92796bd5a1f3663e87f24a76a6fb5f3dabdb3d2f/diff:/var/lib/docker/overlay2/6b96d2f01ba7bf16240d81397be753c74fa1eb78889ce36d1a88224496f5ee64/diff:/var/lib/docker/overlay2/1c326eb1859e3f119094b72e595fe89b3be1895193b0f9124ec3aa7be78fa0ff/diff:/var/lib/docker/overlay2/9c31f48fe49160b4b3d824b787329ea1d50133fd86b0c7f2054bebed2bd136b4/diff:/var/lib/docker/overlay2/6b78f7313fcedc631cc839c84f80e065670f47ccc8e548578527bec6b0f2cce3/diff:/var/lib/docker/overlay2/6bf15a167fccdef650b6e6b0daa521189e2c744a9a41640bc0e963421f139437/diff:/var/lib/docker/overlay2/b01230a9beae37bded75410c1a3849626080918fdc36535a1f4092f9227a7ccf/diff:/var/lib/docker/overlay2/793ce1632cff927c8e2528b23a62c94ed141cb52d5df04225085072c61ca5bb7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dee8e48f45f2d547450f4daebe89b962ba00d4c3c0e85728311b87d0be50d5f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dee8e48f45f2d547450f4daebe89b962ba00d4c3c0e85728311b87d0be50d5f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dee8e48f45f2d547450f4daebe89b962ba00d4c3c0e85728311b87d0be50d5f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220701230032-10066",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220701230032-10066/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220701230032-10066",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220701230032-10066",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220701230032-10066",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b84131f0a443f3e46a27c4a53bbb599561e5894a5499246152418e29a547de10",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49417"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49416"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49413"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49415"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49414"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b84131f0a443",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220701230032-10066": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "261fd4f89726",
	                        "default-k8s-different-port-20220701230032-10066"
	                    ],
	                    "NetworkID": "08b054338871e09e9987c4187ebe43c21ee49646be113b14ac2205c8647ea77d",
	                    "EndpointID": "dc3e5e6cc3047caf3c0c1415491005074769713a8b3dbbad0e642c61ea3eecd8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
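
The post-mortem above shells out to docker inspect and dumps the whole document; when only a few fields matter (container state and the host ports bound for the API server, 8444/tcp in this profile), the Docker Engine Go SDK can fetch them directly. A minimal sketch, assuming a reachable Docker daemon; the container name is the one from this report, and this is an illustration rather than the harness's own code:

    // Read the same state/port fields the post-mortem inspects, via the
    // Docker Engine SDK instead of the docker CLI. Illustration only.
    package main

    import (
    	"context"
    	"fmt"

    	"github.com/docker/docker/client"
    )

    func main() {
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		panic(err)
    	}
    	defer cli.Close()

    	name := "default-k8s-different-port-20220701230032-10066"
    	info, err := cli.ContainerInspect(context.Background(), name)
    	if err != nil {
    		panic(err)
    	}

    	// Is the container running, and which host ports are bound?
    	fmt.Println("status:", info.State.Status)
    	for port, bindings := range info.NetworkSettings.Ports {
    		for _, b := range bindings {
    			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
    		}
    	}
    }

In this failure the inspect output already shows "Status": "running" with 8444/tcp bound to 127.0.0.1:49414, so the container itself is healthy; the timeout comes from inside the guest, not from Docker.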
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220701230032-10066 -n default-k8s-different-port-20220701230032-10066
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220701230032-10066 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC |                     |
	|         | enable-default-cni-20220701225120-10066           |          |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |          |         |         |                     |                     |
	|         | --enable-default-cni=true                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	| start   | -p bridge-20220701225120-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:56 UTC |
	|         | --memory=2048                                     |          |         |         |                     |                     |
	|         | --alsologtostderr                                 |          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |          |         |         |                     |                     |
	|         | --cni=bridge --driver=docker                      |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	| ssh     | -p bridge-20220701225120-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:56 UTC |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| ssh     | -p calico-20220701225121-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:56 UTC |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| delete  | -p calico-20220701225121-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:56 UTC |
	| delete  | -p bridge-20220701225120-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:57 UTC |
	| start   | -p cilium-20220701225121-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:58 UTC |
	|         | --memory=2048                                     |          |         |         |                     |                     |
	|         | --alsologtostderr                                 |          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |          |         |         |                     |                     |
	|         | --cni=cilium --driver=docker                      |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:57 UTC | 01 Jul 22 23:02 UTC |
	|         | old-k8s-version-20220701225700-10066              |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |          |         |         |                     |                     |
	|         | --disable-driver-mounts                           |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:57 UTC | 01 Jul 22 22:57 UTC |
	|         | enable-default-cni-20220701225120-10066           |          |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:57 UTC | 01 Jul 22 22:57 UTC |
	|         | enable-default-cni-20220701225120-10066           |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:57 UTC |                     |
	|         | no-preload-20220701225718-10066                   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --preload=false                       |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| ssh     | -p cilium-20220701225121-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:58 UTC | 01 Jul 22 22:58 UTC |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| delete  | -p cilium-20220701225121-10066                    | minikube | jenkins | v1.26.0 | 01 Jul 22 22:58 UTC | 01 Jul 22 22:58 UTC |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:58 UTC | 01 Jul 22 22:59 UTC |
	|         | embed-certs-20220701225830-10066                  |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC | 01 Jul 22 22:59 UTC |
	|         | embed-certs-20220701225830-10066                  |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC | 01 Jul 22 22:59 UTC |
	|         | embed-certs-20220701225830-10066                  |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC | 01 Jul 22 22:59 UTC |
	|         | embed-certs-20220701225830-10066                  |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC |                     |
	|         | embed-certs-20220701225830-10066                  |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 23:00 UTC | 01 Jul 22 23:00 UTC |
	|         | kubernetes-upgrade-20220701225105-10066           |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 23:00 UTC | 01 Jul 22 23:00 UTC |
	|         | disable-driver-mounts-20220701230032-10066        |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 23:00 UTC |                     |
	|         | default-k8s-different-port-20220701230032-10066   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 01 Jul 22 23:02 UTC | 01 Jul 22 23:02 UTC |
	|         | old-k8s-version-20220701225700-10066              |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 23:02 UTC | 01 Jul 22 23:03 UTC |
	|         | old-k8s-version-20220701225700-10066              |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 01 Jul 22 23:03 UTC | 01 Jul 22 23:03 UTC |
	|         | old-k8s-version-20220701225700-10066              |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 01 Jul 22 23:03 UTC |                     |
	|         | old-k8s-version-20220701225700-10066              |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |          |         |         |                     |                     |
	|         | --disable-driver-mounts                           |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |          |         |         |                     |                     |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/01 23:03:07
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 23:03:07.893768  245311 out.go:296] Setting OutFile to fd 1 ...
	I0701 23:03:07.893867  245311 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:03:07.893875  245311 out.go:309] Setting ErrFile to fd 2...
	I0701 23:03:07.893880  245311 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:03:07.894292  245311 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 23:03:07.894520  245311 out.go:303] Setting JSON to false
	I0701 23:03:07.896335  245311 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2741,"bootTime":1656713847,"procs":710,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 23:03:07.896402  245311 start.go:125] virtualization: kvm guest
	I0701 23:03:07.900224  245311 out.go:177] * [old-k8s-version-20220701225700-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0701 23:03:07.901585  245311 notify.go:193] Checking for updates...
	I0701 23:03:07.902991  245311 out.go:177]   - MINIKUBE_LOCATION=14483
	I0701 23:03:07.904446  245311 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 23:03:07.905843  245311 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:03:07.907236  245311 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 23:03:07.908540  245311 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 23:03:07.910157  245311 config.go:178] Loaded profile config "old-k8s-version-20220701225700-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0701 23:03:07.911993  245311 out.go:177] * Kubernetes 1.24.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.2
	I0701 23:03:07.913279  245311 driver.go:360] Setting default libvirt URI to qemu:///system
	I0701 23:03:07.968140  245311 docker.go:137] docker version: linux-20.10.17
	I0701 23:03:07.968222  245311 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:03:08.074258  245311 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-07-01 23:03:07.997834323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:03:08.074371  245311 docker.go:254] overlay module found
	I0701 23:03:08.077114  245311 out.go:177] * Using the docker driver based on existing profile
	I0701 23:03:08.078527  245311 start.go:284] selected driver: docker
	I0701 23:03:08.078565  245311 start.go:808] validating driver "docker" against &{Name:old-k8s-version-20220701225700-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220701225700-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:03:08.078661  245311 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 23:03:08.079487  245311 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:03:08.187918  245311 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-07-01 23:03:08.110466732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:03:08.188157  245311 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 23:03:08.188178  245311 cni.go:95] Creating CNI manager for ""
	I0701 23:03:08.188186  245311 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:03:08.188208  245311 start_flags.go:310] config:
	{Name:old-k8s-version-20220701225700-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220701225700-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:03:08.190346  245311 out.go:177] * Starting control plane node old-k8s-version-20220701225700-10066 in cluster old-k8s-version-20220701225700-10066
	I0701 23:03:08.192019  245311 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0701 23:03:08.193512  245311 out.go:177] * Pulling base image ...
	I0701 23:03:03.649921  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:06.148952  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:08.150214  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:08.194977  245311 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0701 23:03:08.195015  245311 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0701 23:03:08.195039  245311 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0701 23:03:08.195068  245311 cache.go:57] Caching tarball of preloaded images
	I0701 23:03:08.195332  245311 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 23:03:08.195370  245311 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0701 23:03:08.195516  245311 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/old-k8s-version-20220701225700-10066/config.json ...
	I0701 23:03:08.236520  245311 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0701 23:03:08.236550  245311 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0701 23:03:08.236562  245311 cache.go:208] Successfully downloaded all kic artifacts
	I0701 23:03:08.236598  245311 start.go:352] acquiring machines lock for old-k8s-version-20220701225700-10066: {Name:mkd48005813cb8b9d7d6ba0c322640aa75f33e18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:03:08.236685  245311 start.go:356] acquired machines lock for "old-k8s-version-20220701225700-10066" in 70.204µs
	I0701 23:03:08.236704  245311 start.go:94] Skipping create...Using existing machine configuration
	I0701 23:03:08.236708  245311 fix.go:55] fixHost starting: 
	I0701 23:03:08.236922  245311 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220701225700-10066 --format={{.State.Status}}
	I0701 23:03:08.271818  245311 fix.go:103] recreateIfNeeded on old-k8s-version-20220701225700-10066: state=Stopped err=<nil>
	W0701 23:03:08.271846  245311 fix.go:129] unexpected machine state, will restart: <nil>
	I0701 23:03:08.274187  245311 out.go:177] * Restarting existing docker container for "old-k8s-version-20220701225700-10066" ...
	I0701 23:03:08.291140  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:10.790511  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:08.275545  245311 cli_runner.go:164] Run: docker start old-k8s-version-20220701225700-10066
	I0701 23:03:08.664070  245311 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220701225700-10066 --format={{.State.Status}}
	I0701 23:03:08.700678  245311 kic.go:416] container "old-k8s-version-20220701225700-10066" state is running.
	I0701 23:03:08.701062  245311 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220701225700-10066
	I0701 23:03:08.733773  245311 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/old-k8s-version-20220701225700-10066/config.json ...
	I0701 23:03:08.734039  245311 machine.go:88] provisioning docker machine ...
	I0701 23:03:08.734066  245311 ubuntu.go:169] provisioning hostname "old-k8s-version-20220701225700-10066"
	I0701 23:03:08.734117  245311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220701225700-10066
	I0701 23:03:08.768791  245311 main.go:134] libmachine: Using SSH client type: native
	I0701 23:03:08.768996  245311 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49422 <nil> <nil>}
	I0701 23:03:08.769016  245311 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220701225700-10066 && echo "old-k8s-version-20220701225700-10066" | sudo tee /etc/hostname
	I0701 23:03:08.769671  245311 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45920->127.0.0.1:49422: read: connection reset by peer
	I0701 23:03:11.894828  245311 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220701225700-10066
	
	I0701 23:03:11.894903  245311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220701225700-10066
	I0701 23:03:11.929596  245311 main.go:134] libmachine: Using SSH client type: native
	I0701 23:03:11.929754  245311 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49422 <nil> <nil>}
	I0701 23:03:11.929779  245311 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220701225700-10066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220701225700-10066/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220701225700-10066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 23:03:12.046009  245311 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0701 23:03:12.046041  245311 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
	I0701 23:03:12.046084  245311 ubuntu.go:177] setting up certificates
	I0701 23:03:12.046093  245311 provision.go:83] configureAuth start
	I0701 23:03:12.046138  245311 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220701225700-10066
	I0701 23:03:12.081219  245311 provision.go:138] copyHostCerts
	I0701 23:03:12.081281  245311 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
	I0701 23:03:12.081297  245311 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
	I0701 23:03:12.081362  245311 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
	I0701 23:03:12.081464  245311 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
	I0701 23:03:12.081479  245311 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
	I0701 23:03:12.081526  245311 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
	I0701 23:03:12.081608  245311 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
	I0701 23:03:12.081643  245311 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
	I0701 23:03:12.081714  245311 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1679 bytes)
	I0701 23:03:12.081834  245311 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220701225700-10066 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220701225700-10066]
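The server certificate is regenerated with the SAN list shown in the line above (node IP, loopback, hostname aliases, and the profile name). A minimal sketch for confirming which SANs actually landed in the issued certificate, assuming openssl(1) is available on the host running the test:
	# print the Subject Alternative Name extension of the freshly generated server cert
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'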
	I0701 23:03:12.297758  245311 provision.go:172] copyRemoteCerts
	I0701 23:03:12.297809  245311 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 23:03:12.297841  245311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220701225700-10066
	I0701 23:03:12.331885  245311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/old-k8s-version-20220701225700-10066/id_rsa Username:docker}
	I0701 23:03:12.417970  245311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 23:03:12.436890  245311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0701 23:03:12.453606  245311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 23:03:12.470388  245311 provision.go:86] duration metric: configureAuth took 424.282005ms
	I0701 23:03:12.470409  245311 ubuntu.go:193] setting minikube options for container-runtime
	I0701 23:03:12.470626  245311 config.go:178] Loaded profile config "old-k8s-version-20220701225700-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0701 23:03:12.470645  245311 machine.go:91] provisioned docker machine in 3.73659087s
	I0701 23:03:12.470653  245311 start.go:306] post-start starting for "old-k8s-version-20220701225700-10066" (driver="docker")
	I0701 23:03:12.470661  245311 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 23:03:12.470716  245311 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 23:03:12.470766  245311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220701225700-10066
	I0701 23:03:12.505942  245311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/old-k8s-version-20220701225700-10066/id_rsa Username:docker}
	I0701 23:03:12.589912  245311 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 23:03:12.592525  245311 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 23:03:12.592551  245311 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 23:03:12.592562  245311 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 23:03:12.592568  245311 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0701 23:03:12.592579  245311 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
	I0701 23:03:12.592634  245311 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
	I0701 23:03:12.592744  245311 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem -> 100662.pem in /etc/ssl/certs
	I0701 23:03:12.592852  245311 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 23:03:12.599183  245311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:03:12.616041  245311 start.go:309] post-start completed in 145.377277ms
	I0701 23:03:12.616109  245311 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 23:03:12.616151  245311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220701225700-10066
	I0701 23:03:12.652513  245311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/old-k8s-version-20220701225700-10066/id_rsa Username:docker}
	I0701 23:03:12.734586  245311 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 23:03:12.738345  245311 fix.go:57] fixHost completed within 4.501630955s
	I0701 23:03:12.738369  245311 start.go:81] releasing machines lock for "old-k8s-version-20220701225700-10066", held for 4.501671268s
	I0701 23:03:12.738451  245311 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220701225700-10066
	I0701 23:03:12.772060  245311 ssh_runner.go:195] Run: systemctl --version
	I0701 23:03:12.772090  245311 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0701 23:03:12.772106  245311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220701225700-10066
	I0701 23:03:12.772136  245311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220701225700-10066
	I0701 23:03:12.808246  245311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/old-k8s-version-20220701225700-10066/id_rsa Username:docker}
	I0701 23:03:12.809422  245311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/old-k8s-version-20220701225700-10066/id_rsa Username:docker}
	I0701 23:03:10.650067  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:12.650617  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:12.790867  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:15.290576  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:12.910158  245311 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 23:03:12.921165  245311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 23:03:12.931085  245311 docker.go:179] disabling docker service ...
	I0701 23:03:12.931136  245311 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 23:03:12.940567  245311 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 23:03:12.949046  245311 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 23:03:13.025801  245311 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 23:03:13.106403  245311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0701 23:03:13.115850  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 23:03:13.128011  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.1"|' -i /etc/containerd/config.toml"
	I0701 23:03:13.135774  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0701 23:03:13.143550  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0701 23:03:13.152513  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0701 23:03:13.160042  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0701 23:03:13.167998  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %!s(MISSING) "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
	I0701 23:03:13.180421  245311 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 23:03:13.186591  245311 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 23:03:13.192691  245311 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 23:03:13.267839  245311 ssh_runner.go:195] Run: sudo systemctl restart containerd
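The sed edits above rewrite /etc/containerd/config.toml in place (pause image, CNI conf_dir, SystemdCgroup off) and enable an imports drop-in; the base64 payload written to 02-containerd.conf is just a one-line version header. A sketch of verifying the decode, assuming base64(1):
	# decode the payload minikube writes to /etc/containerd/containerd.conf.d/02-containerd.conf
	echo 'dmVyc2lvbiA9IDIK' | base64 -d
	# prints: version = 2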
	I0701 23:03:13.340440  245311 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0701 23:03:13.340522  245311 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0701 23:03:13.344660  245311 start.go:471] Will wait 60s for crictl version
	I0701 23:03:13.344725  245311 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:03:13.369565  245311 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-07-01T23:03:13Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0701 23:03:14.651524  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:17.148904  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:17.790012  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:19.790585  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:22.290396  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:19.648976  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:21.650153  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:24.417599  245311 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:03:24.440691  245311 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0701 23:03:24.440741  245311 ssh_runner.go:195] Run: containerd --version
	I0701 23:03:24.469959  245311 ssh_runner.go:195] Run: containerd --version
	I0701 23:03:24.500498  245311 out.go:177] * Preparing Kubernetes v1.16.0 on containerd 1.6.6 ...
	I0701 23:03:24.501737  245311 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220701225700-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 23:03:24.536768  245311 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0701 23:03:24.540199  245311 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:03:24.551203  245311 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0701 23:03:24.790375  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:26.790489  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:24.552420  245311 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0701 23:03:24.552480  245311 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:03:24.575611  245311 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:03:24.575629  245311 containerd.go:461] Images already preloaded, skipping extraction
	I0701 23:03:24.575663  245311 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:03:24.598887  245311 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:03:24.598906  245311 cache_images.go:84] Images are preloaded, skipping loading
	I0701 23:03:24.598953  245311 ssh_runner.go:195] Run: sudo crictl info
	I0701 23:03:24.622033  245311 cni.go:95] Creating CNI manager for ""
	I0701 23:03:24.622058  245311 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:03:24.622074  245311 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0701 23:03:24.622087  245311 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220701225700-10066 NodeName:old-k8s-version-20220701225700-10066 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0701 23:03:24.622247  245311 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20220701225700-10066"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220701225700-10066
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.85.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
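The rendered kubeadm config above is four YAML documents in one file: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta1, the kubeadm API level matching Kubernetes v1.16), plus KubeletConfiguration and KubeProxyConfiguration. A quick sketch for listing the document kinds once the file has been written into the node as /var/tmp/minikube/kubeadm.yaml.new (the scp a few lines below):
	# show the apiVersion/kind header of each embedded YAML document
	sudo grep -E '^(apiVersion|kind):' /var/tmp/minikube/kubeadm.yaml.new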
	
	I0701 23:03:24.622340  245311 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-20220701225700-10066 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220701225700-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0701 23:03:24.622393  245311 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0701 23:03:24.629624  245311 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 23:03:24.629691  245311 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 23:03:24.636260  245311 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (581 bytes)
	I0701 23:03:24.649080  245311 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 23:03:24.662009  245311 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0701 23:03:24.674418  245311 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0701 23:03:24.677125  245311 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:03:24.686568  245311 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/old-k8s-version-20220701225700-10066 for IP: 192.168.85.2
	I0701 23:03:24.686688  245311 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
	I0701 23:03:24.686731  245311 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
	I0701 23:03:24.686800  245311 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/old-k8s-version-20220701225700-10066/client.key
	I0701 23:03:24.686870  245311 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/old-k8s-version-20220701225700-10066/apiserver.key.43b9df8c
	I0701 23:03:24.686915  245311 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/old-k8s-version-20220701225700-10066/proxy-client.key
	I0701 23:03:24.687021  245311 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem (1338 bytes)
	W0701 23:03:24.687057  245311 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066_empty.pem, impossibly tiny 0 bytes
	I0701 23:03:24.687071  245311 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 23:03:24.687107  245311 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
	I0701 23:03:24.687137  245311 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
	I0701 23:03:24.687169  245311 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1679 bytes)
	I0701 23:03:24.687212  245311 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:03:24.687831  245311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/old-k8s-version-20220701225700-10066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0701 23:03:24.704914  245311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/old-k8s-version-20220701225700-10066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 23:03:24.721508  245311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/old-k8s-version-20220701225700-10066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 23:03:24.737738  245311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/old-k8s-version-20220701225700-10066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 23:03:24.754505  245311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 23:03:24.771073  245311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0701 23:03:24.787703  245311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 23:03:24.804365  245311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 23:03:24.820449  245311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 23:03:24.836592  245311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem --> /usr/share/ca-certificates/10066.pem (1338 bytes)
	I0701 23:03:24.853170  245311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /usr/share/ca-certificates/100662.pem (1708 bytes)
	I0701 23:03:24.869491  245311 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 23:03:24.881586  245311 ssh_runner.go:195] Run: openssl version
	I0701 23:03:24.886010  245311 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10066.pem && ln -fs /usr/share/ca-certificates/10066.pem /etc/ssl/certs/10066.pem"
	I0701 23:03:24.892833  245311 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10066.pem
	I0701 23:03:24.895662  245311 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul  1 22:28 /usr/share/ca-certificates/10066.pem
	I0701 23:03:24.895725  245311 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10066.pem
	I0701 23:03:24.900309  245311 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10066.pem /etc/ssl/certs/51391683.0"
	I0701 23:03:24.906424  245311 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100662.pem && ln -fs /usr/share/ca-certificates/100662.pem /etc/ssl/certs/100662.pem"
	I0701 23:03:24.913170  245311 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100662.pem
	I0701 23:03:24.915972  245311 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul  1 22:28 /usr/share/ca-certificates/100662.pem
	I0701 23:03:24.916012  245311 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100662.pem
	I0701 23:03:24.920411  245311 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100662.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 23:03:24.926918  245311 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 23:03:24.934042  245311 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:03:24.937163  245311 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul  1 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:03:24.937201  245311 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:03:24.941656  245311 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
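The symlink names in this block (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's CA-lookup convention: each trusted cert in /etc/ssl/certs gets a link named <subject-hash>.0, where the hash is what `openssl x509 -hash` prints. A sketch of reproducing the minikubeCA link by hand, mirroring the commands in the log:
	# compute the subject hash (b5213941 for minikubeCA here) and create the lookup link
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"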
	I0701 23:03:24.948048  245311 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220701225700-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220701225700-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:03:24.948177  245311 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0701 23:03:24.948225  245311 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:03:24.972567  245311 cri.go:87] found id: "d3c2822878e3d5c0812cc2fac3f29d13bedf459f2490040a9bc709660cae4a03"
	I0701 23:03:24.972584  245311 cri.go:87] found id: "5821a5be2b076326c4404b246ac6fafef022db278f55fdd976b2f9cb1c322cf1"
	I0701 23:03:24.972591  245311 cri.go:87] found id: "6308a404bf804f800f0b307307e6f622c7d1cccd7c98944b894a63c5ec983436"
	I0701 23:03:24.972597  245311 cri.go:87] found id: "c86b26263a450111cf26e0ea1b7ff3d4fb01ae656d96b5500a31c67f81eead58"
	I0701 23:03:24.972603  245311 cri.go:87] found id: "d46802ac00b19dfae9ae0654b5a3d230856b398d11938e46fa8ff7785340d56d"
	I0701 23:03:24.972608  245311 cri.go:87] found id: "0b1287841773f64e1d50ed62a432bb982e8e42d4efe54d9062762f737714f293"
	I0701 23:03:24.972614  245311 cri.go:87] found id: "7a40c03b0dd9857db9217af4b5be692b1fab0828003b853a0eaeb2fbc66076e4"
	I0701 23:03:24.972621  245311 cri.go:87] found id: "564a57dc61efaca27acb44e1646ac9d33064955e9ad9e735d8097d2d73c25722"
	I0701 23:03:24.972630  245311 cri.go:87] found id: ""
	I0701 23:03:24.972659  245311 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0701 23:03:24.984676  245311 cri.go:114] JSON = null
	W0701 23:03:24.984724  245311 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0701 23:03:24.984784  245311 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 23:03:24.991366  245311 kubeadm.go:410] found existing configuration files, will attempt cluster restart
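The "JSON = null" / "list returned 0 containers, but ps returned 8" pair above means runc reported no paused containers even though crictl sees eight kube-system containers; minikube logs the mismatch as a warning and, because the kubeadm and kubelet config files are still on disk, falls through to the cluster-restart path. The underlying check, runnable by hand on the node:
	# list runc-managed k8s containers (paused ones would appear here)
	sudo runc --root /run/containerd/runc/k8s.io list -f json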
	I0701 23:03:24.991391  245311 kubeadm.go:626] restartCluster start
	I0701 23:03:24.991422  245311 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 23:03:24.998040  245311 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:03:24.998845  245311 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220701225700-10066" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:03:24.999301  245311 kubeconfig.go:127] "old-k8s-version-20220701225700-10066" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig - will repair!
	I0701 23:03:24.999919  245311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:03:25.001297  245311 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 23:03:25.007607  245311 api_server.go:165] Checking apiserver status ...
	I0701 23:03:25.007644  245311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:03:25.015017  245311 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:03:25.215408  245311 api_server.go:165] Checking apiserver status ...
	I0701 23:03:25.215493  245311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:03:25.224423  245311 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:03:25.415713  245311 api_server.go:165] Checking apiserver status ...
	I0701 23:03:25.415775  245311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:03:25.424534  245311 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:03:25.615926  245311 api_server.go:165] Checking apiserver status ...
	I0701 23:03:25.615986  245311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:03:25.624767  245311 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:03:25.816039  245311 api_server.go:165] Checking apiserver status ...
	I0701 23:03:25.816122  245311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:03:25.825067  245311 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:03:26.015281  245311 api_server.go:165] Checking apiserver status ...
	I0701 23:03:26.015365  245311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:03:26.023715  245311 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:03:26.216006  245311 api_server.go:165] Checking apiserver status ...
	I0701 23:03:26.216089  245311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:03:26.225142  245311 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:03:26.415438  245311 api_server.go:165] Checking apiserver status ...
	I0701 23:03:26.415528  245311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:03:26.424229  245311 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:03:26.615522  245311 api_server.go:165] Checking apiserver status ...
	I0701 23:03:26.615607  245311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:03:26.624326  245311 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:03:26.815600  245311 api_server.go:165] Checking apiserver status ...
	I0701 23:03:26.815665  245311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:03:26.823989  245311 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:03:27.015221  245311 api_server.go:165] Checking apiserver status ...
	I0701 23:03:27.015302  245311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:03:27.023785  245311 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:03:27.216076  245311 api_server.go:165] Checking apiserver status ...
	I0701 23:03:27.216152  245311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:03:27.224646  245311 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:03:27.415938  245311 api_server.go:165] Checking apiserver status ...
	I0701 23:03:27.416003  245311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:03:27.424533  245311 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:03:27.615707  245311 api_server.go:165] Checking apiserver status ...
	I0701 23:03:27.615766  245311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:03:27.623954  245311 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:03:27.815167  245311 api_server.go:165] Checking apiserver status ...
	I0701 23:03:27.815269  245311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:03:27.823664  245311 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
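The block above is minikube polling for an apiserver process roughly every 200ms; each iteration is the same pgrep returning empty stdout and stderr with exit status 1. The equivalent shell loop, as a sketch (minikube's own timeout handling is omitted):
	# wait until a kube-apiserver process with "minikube" on its command line appears
	while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	  sleep 0.2
	done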
	I0701 23:03:24.149095  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:26.650206  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:28.790831  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:31.290582  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:28.016128  245311 api_server.go:165] Checking apiserver status ...
	I0701 23:03:28.016215  245311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:03:28.024730  245311 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:03:28.024749  245311 api_server.go:165] Checking apiserver status ...
	I0701 23:03:28.024781  245311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:03:28.032192  245311 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:03:28.032215  245311 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0701 23:03:28.032221  245311 kubeadm.go:1092] stopping kube-system containers ...
	I0701 23:03:28.032231  245311 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0701 23:03:28.032271  245311 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:03:28.055127  245311 cri.go:87] found id: "d3c2822878e3d5c0812cc2fac3f29d13bedf459f2490040a9bc709660cae4a03"
	I0701 23:03:28.055155  245311 cri.go:87] found id: "5821a5be2b076326c4404b246ac6fafef022db278f55fdd976b2f9cb1c322cf1"
	I0701 23:03:28.055166  245311 cri.go:87] found id: "6308a404bf804f800f0b307307e6f622c7d1cccd7c98944b894a63c5ec983436"
	I0701 23:03:28.055176  245311 cri.go:87] found id: "c86b26263a450111cf26e0ea1b7ff3d4fb01ae656d96b5500a31c67f81eead58"
	I0701 23:03:28.055186  245311 cri.go:87] found id: "d46802ac00b19dfae9ae0654b5a3d230856b398d11938e46fa8ff7785340d56d"
	I0701 23:03:28.055195  245311 cri.go:87] found id: "0b1287841773f64e1d50ed62a432bb982e8e42d4efe54d9062762f737714f293"
	I0701 23:03:28.055204  245311 cri.go:87] found id: "7a40c03b0dd9857db9217af4b5be692b1fab0828003b853a0eaeb2fbc66076e4"
	I0701 23:03:28.055218  245311 cri.go:87] found id: "564a57dc61efaca27acb44e1646ac9d33064955e9ad9e735d8097d2d73c25722"
	I0701 23:03:28.055224  245311 cri.go:87] found id: ""
	I0701 23:03:28.055229  245311 cri.go:232] Stopping containers: [d3c2822878e3d5c0812cc2fac3f29d13bedf459f2490040a9bc709660cae4a03 5821a5be2b076326c4404b246ac6fafef022db278f55fdd976b2f9cb1c322cf1 6308a404bf804f800f0b307307e6f622c7d1cccd7c98944b894a63c5ec983436 c86b26263a450111cf26e0ea1b7ff3d4fb01ae656d96b5500a31c67f81eead58 d46802ac00b19dfae9ae0654b5a3d230856b398d11938e46fa8ff7785340d56d 0b1287841773f64e1d50ed62a432bb982e8e42d4efe54d9062762f737714f293 7a40c03b0dd9857db9217af4b5be692b1fab0828003b853a0eaeb2fbc66076e4 564a57dc61efaca27acb44e1646ac9d33064955e9ad9e735d8097d2d73c25722]
	I0701 23:03:28.055267  245311 ssh_runner.go:195] Run: which crictl
	I0701 23:03:28.057913  245311 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop d3c2822878e3d5c0812cc2fac3f29d13bedf459f2490040a9bc709660cae4a03 5821a5be2b076326c4404b246ac6fafef022db278f55fdd976b2f9cb1c322cf1 6308a404bf804f800f0b307307e6f622c7d1cccd7c98944b894a63c5ec983436 c86b26263a450111cf26e0ea1b7ff3d4fb01ae656d96b5500a31c67f81eead58 d46802ac00b19dfae9ae0654b5a3d230856b398d11938e46fa8ff7785340d56d 0b1287841773f64e1d50ed62a432bb982e8e42d4efe54d9062762f737714f293 7a40c03b0dd9857db9217af4b5be692b1fab0828003b853a0eaeb2fbc66076e4 564a57dc61efaca27acb44e1646ac9d33064955e9ad9e735d8097d2d73c25722
	I0701 23:03:28.083142  245311 ssh_runner.go:195] Run: sudo systemctl stop kubelet
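With the apiserver down, the workload is torn down through crictl rather than kubectl: list kube-system container IDs by pod-namespace label, stop them, then stop the kubelet so nothing restarts them. A condensed sketch of the same sequence:
	# stop all kube-system containers, then the kubelet
	ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
	sudo crictl stop $ids
	sudo systemctl stop kubelet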
	I0701 23:03:28.093041  245311 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:03:28.099700  245311 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 Jul  1 22:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5783 Jul  1 22:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5931 Jul  1 22:57 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5735 Jul  1 22:57 /etc/kubernetes/scheduler.conf
	
	I0701 23:03:28.099746  245311 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0701 23:03:28.106136  245311 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0701 23:03:28.112509  245311 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0701 23:03:28.118754  245311 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0701 23:03:28.124921  245311 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:03:28.131186  245311 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0701 23:03:28.131205  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:03:28.190509  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:03:28.946705  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:03:29.093881  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:03:29.154901  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
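
Rather than a full kubeadm init, the in-place reconfigure path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A sketch of that phase loop, with the commands copied from the log above:

// phases.go — replay the individual kubeadm init phases the reconfigure
// path runs instead of a full init. Sketch; paths as logged.
package main

import (
    "fmt"
    "os/exec"
)

func main() {
    phases := []string{
        "certs all",
        "kubeconfig all",
        "kubelet-start",
        "control-plane all",
        "etcd local",
    }
    for _, phase := range phases {
        script := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" `+
            `kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
        if out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput(); err != nil {
            panic(fmt.Sprintf("phase %q failed: %v\n%s", phase, err, out))
        }
    }
}
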
	I0701 23:03:29.323334  245311 api_server.go:51] waiting for apiserver process to appear ...
	I0701 23:03:29.323406  245311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:03:29.831738  245311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:03:30.332016  245311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:03:30.343406  245311 api_server.go:71] duration metric: took 1.020075023s to wait for apiserver process to appear ...
	I0701 23:03:30.343438  245311 api_server.go:87] waiting for apiserver healthz status ...
	I0701 23:03:30.343452  245311 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0701 23:03:30.343771  245311 api_server.go:256] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0701 23:03:30.844470  245311 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0701 23:03:28.650906  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:30.650936  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:33.149061  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:34.020047  245311 api_server.go:266] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0701 23:03:34.020081  245311 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0701 23:03:34.344589  245311 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0701 23:03:34.420560  245311 api_server.go:266] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0701 23:03:34.420595  245311 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0701 23:03:34.844017  245311 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0701 23:03:34.923461  245311 api_server.go:266] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0701 23:03:34.923512  245311 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0701 23:03:35.344054  245311 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0701 23:03:35.419453  245311 api_server.go:266] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0701 23:03:35.419552  245311 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0701 23:03:35.843950  245311 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0701 23:03:35.848388  245311 api_server.go:266] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0701 23:03:35.854232  245311 api_server.go:140] control plane version: v1.16.0
	I0701 23:03:35.854252  245311 api_server.go:130] duration metric: took 5.510807524s to wait for apiserver health ...
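
The healthz wait above passes through three recognizable phases: connection refused while the apiserver container starts, 403 because anonymous requests are rejected until the RBAC bootstrap roles exist, then 500 while poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, ca-registration) finish, and finally 200. A polling sketch that tolerates all three phases; the probe skips TLS verification since the host does not trust the apiserver's serving cert, and the IP and port are taken from the log:

// healthzwait.go — poll the apiserver /healthz endpoint until it returns 200.
// Sketch only, not minikube's implementation.
package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{
        Timeout: 2 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    for {
        resp, err := client.Get("https://192.168.85.2:8443/healthz")
        if err != nil {
            fmt.Println("not up yet:", err) // e.g. connection refused
        } else {
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Printf("healthz ok: %s\n", body)
                return
            }
            // 403: anonymous access is denied until RBAC bootstrap completes.
            // 500: some poststarthook is still "failed: reason withheld".
            fmt.Printf("healthz %d, retrying\n", resp.StatusCode)
        }
        time.Sleep(500 * time.Millisecond)
    }
}
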
	I0701 23:03:35.854260  245311 cni.go:95] Creating CNI manager for ""
	I0701 23:03:35.854266  245311 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:03:35.856564  245311 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:03:33.791141  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:36.290298  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:35.858019  245311 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:03:35.861610  245311 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0701 23:03:35.861629  245311 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:03:35.874687  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
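
Because the docker driver is paired with the containerd runtime, kindnet is selected as the CNI: the manifest is copied to /var/tmp/minikube/cni.yaml and applied with the cluster's own kubectl binary. The apply step in isolation (paths as logged; illustrative only):

// cniapply.go — apply the kindnet manifest with the cluster's own kubectl,
// mirroring the logged command.
package main

import "os/exec"

func main() {
    cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.16.0/kubectl",
        "apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
        "-f", "/var/tmp/minikube/cni.yaml")
    if out, err := cmd.CombinedOutput(); err != nil {
        panic(string(out))
    }
}
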
	I0701 23:03:36.064350  245311 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 23:03:36.069321  245311 system_pods.go:59] 8 kube-system pods found
	I0701 23:03:36.069350  245311 system_pods.go:61] "coredns-5644d7b6d9-s46dh" [78f4d26d-6241-49b7-9290-ebecfc3a4266] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0701 23:03:36.069356  245311 system_pods.go:61] "etcd-old-k8s-version-20220701225700-10066" [b0a0d23b-4f6f-4f94-b8b1-bcf34773d444] Running
	I0701 23:03:36.069361  245311 system_pods.go:61] "kindnet-gmgzk" [0f6b7680-dbef-4a34-8e81-5e9a14db6993] Running
	I0701 23:03:36.069367  245311 system_pods.go:61] "kube-apiserver-old-k8s-version-20220701225700-10066" [c102df68-e3d8-4ea9-8baa-1ebe4bf070cf] Running
	I0701 23:03:36.069375  245311 system_pods.go:61] "kube-controller-manager-old-k8s-version-20220701225700-10066" [8a9707ca-9f3c-444d-b651-d1b2dd0d901a] Running
	I0701 23:03:36.069387  245311 system_pods.go:61] "kube-proxy-wc2qp" [b1071924-e294-48b1-a07f-d43b5b91b2a6] Running
	I0701 23:03:36.069398  245311 system_pods.go:61] "kube-scheduler-old-k8s-version-20220701225700-10066" [af2b6138-7028-4250-a30c-7adb3ba011f7] Running
	I0701 23:03:36.069410  245311 system_pods.go:61] "storage-provisioner" [87527843-3c92-4175-b5ba-a2e3f4e67c03] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0701 23:03:36.069420  245311 system_pods.go:74] duration metric: took 5.049542ms to wait for pod list to return data ...
	I0701 23:03:36.069431  245311 node_conditions.go:102] verifying NodePressure condition ...
	I0701 23:03:36.071653  245311 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0701 23:03:36.071690  245311 node_conditions.go:123] node cpu capacity is 8
	I0701 23:03:36.071701  245311 node_conditions.go:105] duration metric: took 2.263875ms to run NodePressure ...
	I0701 23:03:36.071716  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:03:36.220889  245311 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0701 23:03:36.224016  245311 retry.go:31] will retry after 360.127272ms: kubelet not initialised
	I0701 23:03:36.587973  245311 retry.go:31] will retry after 436.71002ms: kubelet not initialised
	I0701 23:03:37.029075  245311 retry.go:31] will retry after 527.46423ms: kubelet not initialised
	I0701 23:03:37.560603  245311 retry.go:31] will retry after 780.162888ms: kubelet not initialised
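
The "will retry after …" lines come from a wait loop whose interval grows with jitter (from roughly 360ms up to ~19s in this run) until the restarted kubelet has initialised its pods. A sketch of that pattern; the growth factor and jitter here are assumptions, and minikube's actual retry package may tune them differently:

// retrywait.go — re-check a condition with a jittered, growing interval,
// the pattern behind the "will retry after ..." lines above. Sketch only.
package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

func retryUntil(timeout time.Duration, check func() error) error {
    start := time.Now()
    wait := 300 * time.Millisecond
    for time.Since(start) < timeout {
        err := check()
        if err == nil {
            return nil
        }
        fmt.Printf("will retry after %v: %v\n", wait, err)
        time.Sleep(wait)
        // Grow the interval with jitter so concurrent waiters spread out.
        wait = time.Duration(float64(wait) * (1.2 + rand.Float64()))
    }
    return errors.New("timed out waiting for condition")
}

func main() {
    _ = retryUntil(5*time.Second, func() error {
        return errors.New("kubelet not initialised") // stand-in condition
    })
}
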
	I0701 23:03:35.150820  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:37.649551  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:38.290435  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:40.290509  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:42.290583  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:38.344613  245311 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I0701 23:03:39.851461  245311 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I0701 23:03:40.929845  245311 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I0701 23:03:42.803095  245311 retry.go:31] will retry after 2.549945972s: kubelet not initialised
	I0701 23:03:40.149422  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:42.150025  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:44.790297  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:47.290396  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:45.357579  245311 retry.go:31] will retry after 5.131623747s: kubelet not initialised
	I0701 23:03:44.649730  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:47.149343  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:49.290852  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:51.791243  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:50.494173  245311 retry.go:31] will retry after 9.757045979s: kubelet not initialised
	I0701 23:03:49.149805  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:51.650866  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:54.290711  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:56.790617  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:03:54.149456  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:56.654852  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:03:58.790654  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:01.291038  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:00.256587  245311 retry.go:31] will retry after 18.937774914s: kubelet not initialised
	I0701 23:03:59.148753  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:01.149101  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:03.149574  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:03.790393  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:06.290353  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:05.650257  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:08.148867  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:08.790188  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:10.790582  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:10.149538  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:12.649608  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:13.290343  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:15.290588  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:14.650366  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:17.149423  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:17.790523  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:19.790597  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:22.290303  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:19.198109  245311 retry.go:31] will retry after 15.44552029s: kubelet not initialised
	I0701 23:04:19.649543  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:21.650342  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:24.290501  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:26.790371  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:24.149385  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:26.149712  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:29.290654  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:31.790397  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:28.649560  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:30.650467  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:33.149276  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:34.290670  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:36.790398  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:34.649458  245311 kubeadm.go:777] kubelet initialised
	I0701 23:04:34.649488  245311 kubeadm.go:778] duration metric: took 58.428569716s waiting for restarted kubelet to initialise ...
	I0701 23:04:34.649497  245311 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:04:34.654769  245311 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace to be "Ready" ...
	I0701 23:04:36.664030  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
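
pod_ready reports a pod "Ready" only when its Ready condition is True, which requires the readiness probe to pass; CoreDNS keeps reporting False here, consistent with pod networking not yet being functional. The same check by hand, assuming kubectl's current context points at this cluster and using the pod name from the log:

// podready.go — the check behind the pod_ready lines: a pod counts as
// "Ready" only when its Ready condition is True. Sketch only.
package main

import (
    "fmt"
    "os/exec"
)

func main() {
    out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pod",
        "coredns-5644d7b6d9-9wsbl",
        "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    if err != nil {
        panic(err)
    }
    fmt.Printf("Ready=%s\n", out) // "True" once the readiness probe passes
}
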
	I0701 23:04:35.649313  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:37.649707  235408 pod_ready.go:102] pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:39.290427  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:41.291233  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:39.163636  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:41.165297  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:40.144200  235408 pod_ready.go:81] duration metric: took 4m0.307543952s waiting for pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace to be "Ready" ...
	E0701 23:04:40.144229  235408 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-nss5q" in "kube-system" namespace to be "Ready" (will not retry!)
	I0701 23:04:40.144249  235408 pod_ready.go:38] duration metric: took 4m13.234365476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:04:40.144274  235408 kubeadm.go:630] restartCluster took 4m24.910305307s
	W0701 23:04:40.144417  235408 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0701 23:04:40.144451  235408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0701 23:04:42.654091  235408 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.509624301s)
	I0701 23:04:42.654152  235408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:04:42.664036  235408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:04:42.670974  235408 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0701 23:04:42.671017  235408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:04:42.677597  235408 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 23:04:42.677642  235408 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
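
Having timed out after 4m0s waiting for system-critical pods, restartCluster gives up: kubeadm reset tears the node down (which is why the config check above finds no /etc/kubernetes/*.conf files and exits with status 2), and a fresh kubeadm init runs with the preflight checks that always fail inside a docker "node" container explicitly ignored. A condensed sketch, with the flag list abbreviated from the log:

// resetinit.go — the fallback after restartCluster times out. Sketch only.
package main

import "os/exec"

func run(script string) ([]byte, error) {
    return exec.Command("/bin/bash", "-c", script).CombinedOutput()
}

func main() {
    const env = `sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" `
    // Reset removes /etc/kubernetes/*.conf, leaving a clean slate for init.
    if out, err := run(env + `kubeadm reset --cri-socket /run/containerd/containerd.sock --force`); err != nil {
        panic(string(out))
    }
    // Preflight errors such as Swap, Mem and SystemVerification are expected
    // inside a container, so they are ignored (list abbreviated here).
    if out, err := run(env + `kubeadm init --config /var/tmp/minikube/kubeadm.yaml ` +
        `--ignore-preflight-errors=Port-10250,Swap,Mem,SystemVerification,` +
        `FileContent--proc-sys-net-bridge-bridge-nf-call-iptables`); err != nil {
        panic(string(out))
    }
}
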
	I0701 23:04:43.790939  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:46.290653  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:43.664525  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:46.164397  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:51.328420  235408 out.go:204]   - Generating certificates and keys ...
	I0701 23:04:51.331061  235408 out.go:204]   - Booting up control plane ...
	I0701 23:04:51.333525  235408 out.go:204]   - Configuring RBAC rules ...
	I0701 23:04:51.335854  235408 cni.go:95] Creating CNI manager for ""
	I0701 23:04:51.335870  235408 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:04:51.337611  235408 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:04:48.790068  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:51.291112  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:48.164757  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:50.663625  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:52.664331  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:51.339046  235408 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:04:51.343119  235408 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:04:51.343134  235408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:04:51.359986  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0701 23:04:52.107501  235408 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 23:04:52.107569  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:04:52.107571  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04 minikube.k8s.io/name=embed-certs-20220701225830-10066 minikube.k8s.io/updated_at=2022_07_01T23_04_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:04:52.114177  235408 ops.go:34] apiserver oom_adj: -16
	I0701 23:04:52.168983  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:04:52.765755  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:04:53.265350  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:04:53.790558  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:56.290612  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:04:55.167013  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:57.664423  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:53.765158  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:04:54.265536  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:04:54.765904  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:04:55.266103  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:04:55.766202  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:04:56.266179  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:04:56.765418  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:04:57.266043  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:04:57.765244  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:04:58.265558  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:04:58.291143  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:05:00.790613  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:05:00.164098  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:05:02.663706  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:04:58.765526  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:04:59.266139  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:04:59.765390  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:05:00.265922  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:05:00.765974  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:05:01.265625  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:05:01.765737  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:05:02.265515  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:05:02.765328  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:05:03.265324  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:05:03.765257  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:05:04.265446  235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:05:04.338218  235408 kubeadm.go:1045] duration metric: took 12.230707856s to wait for elevateKubeSystemPrivileges.
	I0701 23:05:04.338252  235408 kubeadm.go:397] StartCluster complete in 4m49.146582027s
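
The burst of "kubectl get sa default" calls above is elevateKubeSystemPrivileges waiting for the controller manager to create the default ServiceAccount, which happens asynchronously after init and must exist before the minikube-rbac clusterrolebinding (binding kube-system:default to cluster-admin) is useful. A sketch of that wait, with paths from the log:

// sawait.go — poll for the "default" ServiceAccount after kubeadm init.
// Sketch only; minikube performs this loop over SSH.
package main

import (
    "fmt"
    "os/exec"
    "time"
)

func main() {
    deadline := time.Now().Add(2 * time.Minute)
    for time.Now().Before(deadline) {
        err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.24.2/kubectl",
            "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
        if err == nil {
            fmt.Println("default ServiceAccount exists; RBAC grant can proceed")
            return
        }
        time.Sleep(500 * time.Millisecond)
    }
    panic("timed out waiting for default ServiceAccount")
}
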
	I0701 23:05:04.338275  235408 settings.go:142] acquiring lock: {Name:mk319951f11766fbe002e53432d5b04e4322851e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:05:04.338398  235408 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:05:04.340223  235408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:05:04.855599  235408 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220701225830-10066" rescaled to 1
	I0701 23:05:04.855656  235408 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 23:05:04.855700  235408 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 23:05:04.858196  235408 out.go:177] * Verifying Kubernetes components...
	I0701 23:05:04.855755  235408 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0701 23:05:04.855891  235408 config.go:178] Loaded profile config "embed-certs-20220701225830-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:05:04.859405  235408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:05:04.859435  235408 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220701225830-10066"
	I0701 23:05:04.859453  235408 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220701225830-10066"
	I0701 23:05:04.859461  235408 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220701225830-10066"
	I0701 23:05:04.859464  235408 addons.go:65] Setting dashboard=true in profile "embed-certs-20220701225830-10066"
	I0701 23:05:04.859471  235408 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220701225830-10066"
	I0701 23:05:04.859481  235408 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220701225830-10066"
	I0701 23:05:04.859500  235408 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220701225830-10066"
	W0701 23:05:04.859508  235408 addons.go:162] addon metrics-server should already be in state true
	I0701 23:05:04.859485  235408 addons.go:153] Setting addon dashboard=true in "embed-certs-20220701225830-10066"
	I0701 23:05:04.859563  235408 host.go:66] Checking if "embed-certs-20220701225830-10066" exists ...
	W0701 23:05:04.859570  235408 addons.go:162] addon dashboard should already be in state true
	I0701 23:05:04.859619  235408 host.go:66] Checking if "embed-certs-20220701225830-10066" exists ...
	I0701 23:05:04.859848  235408 cli_runner.go:164] Run: docker container inspect embed-certs-20220701225830-10066 --format={{.State.Status}}
	W0701 23:05:04.859469  235408 addons.go:162] addon storage-provisioner should already be in state true
	I0701 23:05:04.859945  235408 host.go:66] Checking if "embed-certs-20220701225830-10066" exists ...
	I0701 23:05:04.860040  235408 cli_runner.go:164] Run: docker container inspect embed-certs-20220701225830-10066 --format={{.State.Status}}
	I0701 23:05:04.860092  235408 cli_runner.go:164] Run: docker container inspect embed-certs-20220701225830-10066 --format={{.State.Status}}
	I0701 23:05:04.860352  235408 cli_runner.go:164] Run: docker container inspect embed-certs-20220701225830-10066 --format={{.State.Status}}
	I0701 23:05:04.909120  235408 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0701 23:05:04.910622  235408 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0701 23:05:04.910667  235408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0701 23:05:04.910732  235408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220701225830-10066
	I0701 23:05:04.910634  235408 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 23:05:04.912602  235408 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:05:04.912622  235408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 23:05:04.912669  235408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220701225830-10066
	I0701 23:05:04.914708  235408 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0701 23:05:04.915571  235408 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220701225830-10066"
	W0701 23:05:04.916302  235408 addons.go:162] addon default-storageclass should already be in state true
	I0701 23:05:04.917660  235408 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0701 23:05:04.916327  235408 host.go:66] Checking if "embed-certs-20220701225830-10066" exists ...
	I0701 23:05:03.290473  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:05:05.290586  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:05:04.918912  235408 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0701 23:05:04.918935  235408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0701 23:05:04.918022  235408 cli_runner.go:164] Run: docker container inspect embed-certs-20220701225830-10066 --format={{.State.Status}}
	I0701 23:05:04.918980  235408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220701225830-10066
	I0701 23:05:04.938296  235408 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220701225830-10066" to be "Ready" ...
	I0701 23:05:04.938621  235408 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0701 23:05:04.951256  235408 node_ready.go:49] node "embed-certs-20220701225830-10066" has status "Ready":"True"
	I0701 23:05:04.951281  235408 node_ready.go:38] duration metric: took 12.948859ms waiting for node "embed-certs-20220701225830-10066" to be "Ready" ...
	I0701 23:05:04.951291  235408 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:05:04.960606  235408 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-92847" in "kube-system" namespace to be "Ready" ...
	I0701 23:05:04.965657  235408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/embed-certs-20220701225830-10066/id_rsa Username:docker}
	I0701 23:05:04.973100  235408 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 23:05:04.973128  235408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 23:05:04.973175  235408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220701225830-10066
	I0701 23:05:04.980713  235408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/embed-certs-20220701225830-10066/id_rsa Username:docker}
	I0701 23:05:04.989826  235408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/embed-certs-20220701225830-10066/id_rsa Username:docker}
	I0701 23:05:05.015090  235408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/embed-certs-20220701225830-10066/id_rsa Username:docker}
	I0701 23:05:05.133178  235408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:05:05.134357  235408 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0701 23:05:05.134383  235408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0701 23:05:05.135349  235408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 23:05:05.149235  235408 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0701 23:05:05.149257  235408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0701 23:05:05.227888  235408 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:05:05.227916  235408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0701 23:05:05.245515  235408 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0701 23:05:05.245541  235408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0701 23:05:05.320671  235408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:05:05.339055  235408 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0701 23:05:05.339146  235408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0701 23:05:05.431737  235408 start.go:809] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
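
The sed pipeline a few lines earlier rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway; the stanza it splices in ahead of the `forward . /etc/resolv.conf` line reconstructs to:

    hosts {
       192.168.67.1 host.minikube.internal
       fallthrough
    }

The fallthrough directive matters: any query other than host.minikube.internal falls through to the next plugin, so normal cluster and upstream DNS resolution is unaffected.
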
	I0701 23:05:05.432344  235408 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0701 23:05:05.432369  235408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0701 23:05:05.536922  235408 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0701 23:05:05.536949  235408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0701 23:05:05.620921  235408 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0701 23:05:05.620951  235408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0701 23:05:05.638448  235408 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0701 23:05:05.638474  235408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0701 23:05:05.656653  235408 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0701 23:05:05.656685  235408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0701 23:05:05.742169  235408 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0701 23:05:05.742201  235408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0701 23:05:05.833509  235408 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:05:05.833538  235408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0701 23:05:05.923171  235408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:05:06.343816  235408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.023093925s)
	I0701 23:05:06.343855  235408 addons.go:383] Verifying addon metrics-server=true in "embed-certs-20220701225830-10066"
	I0701 23:05:06.971621  235408 pod_ready.go:102] pod "coredns-6d4b75cb6d-92847" in "kube-system" namespace has status "Ready":"False"
	I0701 23:05:07.418036  235408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.494765907s)
	I0701 23:05:07.420030  235408 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0701 23:05:04.664454  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:05:07.168344  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:05:07.421388  235408 addons.go:414] enableAddons completed in 2.565647158s
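
Addon enablement follows one shape throughout: each addon's manifests are scp'd under /etc/kubernetes/addons, then applied in a batched kubectl call per addon, which is why metrics-server and dashboard each show a single multi-file apply above. A sketch of that step, with the file list abbreviated from the log:

// addonapply.go — batched apply of an addon's manifests. Illustrative only.
package main

import "os/exec"

func main() {
    files := []string{
        "/etc/kubernetes/addons/metrics-apiservice.yaml",
        "/etc/kubernetes/addons/metrics-server-deployment.yaml",
        "/etc/kubernetes/addons/metrics-server-rbac.yaml",
        "/etc/kubernetes/addons/metrics-server-service.yaml",
    }
    args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
        "/var/lib/minikube/binaries/v1.24.2/kubectl", "apply"}
    for _, f := range files {
        args = append(args, "-f", f)
    }
    if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
        panic(string(out))
    }
}
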
	I0701 23:05:07.790365  239469 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:05:08.792772  239469 node_ready.go:38] duration metric: took 4m0.009239027s waiting for node "default-k8s-different-port-20220701230032-10066" to be "Ready" ...
	I0701 23:05:08.795025  239469 out.go:177] 
	W0701 23:05:08.796589  239469 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0701 23:05:08.796607  239469 out.go:239] * 
	W0701 23:05:08.797333  239469 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 23:05:08.798802  239469 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	cb48669a69f64       6fb66cd78abfe       About a minute ago   Running             kindnet-cni               1                   4386b73a1f791
	c11940cc2c8ec       6fb66cd78abfe       4 minutes ago        Exited              kindnet-cni               0                   4386b73a1f791
	b63fba32c68cc       a634548d10b03       4 minutes ago        Running             kube-proxy                0                   8996b4de19f2f
	50e0bf3dbb8c1       aebe758cef4cd       4 minutes ago        Running             etcd                      0                   0e8080292fce1
	f41d2b7f1a0c9       34cdf99b1bb3b       4 minutes ago        Running             kube-controller-manager   0                   42cd5575a78ac
	a349e45d95bb6       d3377ffb7177c       4 minutes ago        Running             kube-apiserver            0                   b016974955465
	042166814f4c8       5d725196c1f47       4 minutes ago        Running             kube-scheduler            0                   a077b2a3977f0
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-07-01 23:00:41 UTC, end at Fri 2022-07-01 23:05:09 UTC. --
	Jul 01 23:01:08 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:01:08.247708302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 23:01:08 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:01:08.247718115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 23:01:08 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:01:08.247896864Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4386b73a1f791d0ffc3776da483f1f4457dfbaa0c69557993b95cb6067747d32 pid=1714 runtime=io.containerd.runc.v2
	Jul 01 23:01:08 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:01:08.248793482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 23:01:08 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:01:08.248881925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 23:01:08 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:01:08.248895815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 23:01:08 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:01:08.249093789Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8996b4de19f2f8559dc1935bfb5f8ac30f57b8d6c1962531d6eee17add264ce8 pid=1723 runtime=io.containerd.runc.v2
	Jul 01 23:01:08 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:01:08.307449311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qg5j2,Uid:c67a38f9-ae75-40ea-8992-85a437368c50,Namespace:kube-system,Attempt:0,} returns sandbox id \"8996b4de19f2f8559dc1935bfb5f8ac30f57b8d6c1962531d6eee17add264ce8\""
	Jul 01 23:01:08 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:01:08.310349850Z" level=info msg="CreateContainer within sandbox \"8996b4de19f2f8559dc1935bfb5f8ac30f57b8d6c1962531d6eee17add264ce8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Jul 01 23:01:08 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:01:08.326712165Z" level=info msg="CreateContainer within sandbox \"8996b4de19f2f8559dc1935bfb5f8ac30f57b8d6c1962531d6eee17add264ce8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7\""
	Jul 01 23:01:08 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:01:08.327300482Z" level=info msg="StartContainer for \"b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7\""
	Jul 01 23:01:08 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:01:08.378194801Z" level=info msg="StartContainer for \"b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7\" returns successfully"
	Jul 01 23:01:08 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:01:08.534698053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-49h72,Uid:bee4a070-eb2f-45af-a824-f8ebb08e21cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4386b73a1f791d0ffc3776da483f1f4457dfbaa0c69557993b95cb6067747d32\""
	Jul 01 23:01:08 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:01:08.537305366Z" level=info msg="CreateContainer within sandbox \"4386b73a1f791d0ffc3776da483f1f4457dfbaa0c69557993b95cb6067747d32\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Jul 01 23:01:08 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:01:08.560396374Z" level=info msg="CreateContainer within sandbox \"4386b73a1f791d0ffc3776da483f1f4457dfbaa0c69557993b95cb6067747d32\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"c11940cc2c8ec5e2f7fd5f8efbe605f5483acdebf6732ce60b631524b5e42b6a\""
	Jul 01 23:01:08 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:01:08.561003568Z" level=info msg="StartContainer for \"c11940cc2c8ec5e2f7fd5f8efbe605f5483acdebf6732ce60b631524b5e42b6a\""
	Jul 01 23:01:08 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:01:08.738806325Z" level=info msg="StartContainer for \"c11940cc2c8ec5e2f7fd5f8efbe605f5483acdebf6732ce60b631524b5e42b6a\" returns successfully"
	Jul 01 23:03:49 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:03:49.286277778Z" level=info msg="shim disconnected" id=c11940cc2c8ec5e2f7fd5f8efbe605f5483acdebf6732ce60b631524b5e42b6a
	Jul 01 23:03:49 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:03:49.286343509Z" level=warning msg="cleaning up after shim disconnected" id=c11940cc2c8ec5e2f7fd5f8efbe605f5483acdebf6732ce60b631524b5e42b6a namespace=k8s.io
	Jul 01 23:03:49 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:03:49.286362757Z" level=info msg="cleaning up dead shim"
	Jul 01 23:03:49 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:03:49.296224580Z" level=warning msg="cleanup warnings time=\"2022-07-01T23:03:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2112 runtime=io.containerd.runc.v2\n"
	Jul 01 23:03:49 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:03:49.328349667Z" level=info msg="CreateContainer within sandbox \"4386b73a1f791d0ffc3776da483f1f4457dfbaa0c69557993b95cb6067747d32\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Jul 01 23:03:49 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:03:49.342295023Z" level=info msg="CreateContainer within sandbox \"4386b73a1f791d0ffc3776da483f1f4457dfbaa0c69557993b95cb6067747d32\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"cb48669a69f64e2f17c64425e02c1ae6dfa44b7dc264a7992b2952f09646f452\""
	Jul 01 23:03:49 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:03:49.342774956Z" level=info msg="StartContainer for \"cb48669a69f64e2f17c64425e02c1ae6dfa44b7dc264a7992b2952f09646f452\""
	Jul 01 23:03:49 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:03:49.521312990Z" level=info msg="StartContainer for \"cb48669a69f64e2f17c64425e02c1ae6dfa44b7dc264a7992b2952f09646f452\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220701230032-10066
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220701230032-10066
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04
	                    minikube.k8s.io/name=default-k8s-different-port-20220701230032-10066
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_01T23_00_55_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 01 Jul 2022 23:00:52 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220701230032-10066
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 01 Jul 2022 23:05:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 01 Jul 2022 23:01:05 +0000   Fri, 01 Jul 2022 23:00:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 01 Jul 2022 23:01:05 +0000   Fri, 01 Jul 2022 23:00:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 01 Jul 2022 23:01:05 +0000   Fri, 01 Jul 2022 23:00:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 01 Jul 2022 23:01:05 +0000   Fri, 01 Jul 2022 23:00:49 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-different-port-20220701230032-10066
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873484Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873484Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                674fca36-2ebb-426c-b65b-bd78bdb510f5
	  Boot ID:                    a4927dcd-d031-4927-a8c8-2ea0f9a10287
	  Kernel Version:             5.15.0-1012-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.6
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220701230032-10066                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m14s
	  kube-system                 kindnet-49h72                                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220701230032-10066             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220701230032-10066    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-proxy-qg5j2                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220701230032-10066             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m1s   kube-proxy       
	  Normal  Starting                 4m15s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m14s  kubelet          Node default-k8s-different-port-20220701230032-10066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s  kubelet          Node default-k8s-different-port-20220701230032-10066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s  kubelet          Node default-k8s-different-port-20220701230032-10066 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m14s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s   node-controller  Node default-k8s-different-port-20220701230032-10066 event: Registered Node default-k8s-different-port-20220701230032-10066 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +1.002277] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +2.015803] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000004] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +4.255546] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000007] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000001] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000005] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000011] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +8.195166] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000007] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000003] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000003] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	
	* 
	* ==> etcd [50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52] <==
	* {"level":"info","ts":"2022-07-01T23:00:48.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-07-01T23:00:48.721Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-07-01T23:00:48.724Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-01T23:00:48.724Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-01T23:00:48.725Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-01T23:00:48.724Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-01T23:00:48.724Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-different-port-20220701230032-10066 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:00:49.355Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-01T23:00:49.355Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  23:05:10 up 47 min,  0 users,  load average: 1.71, 2.13, 2.20
	Linux default-k8s-different-port-20220701230032-10066 5.15.0-1012-gcp #17~20.04.1-Ubuntu SMP Thu Jun 23 16:10:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2] <==
	* I0701 23:00:52.122528       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0701 23:00:52.122739       1 cache.go:39] Caches are synced for autoregister controller
	I0701 23:00:52.122766       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0701 23:00:52.126063       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0701 23:00:52.126674       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0701 23:00:52.138004       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0701 23:00:52.142988       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0701 23:00:52.766811       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0701 23:00:53.027362       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0701 23:00:53.030604       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0701 23:00:53.030622       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0701 23:00:53.389140       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0701 23:00:53.433846       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0701 23:00:53.563524       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0701 23:00:53.568204       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0701 23:00:53.569200       1 controller.go:611] quota admission added evaluator for: endpoints
	I0701 23:00:53.572998       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0701 23:00:54.150574       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0701 23:00:54.803526       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0701 23:00:54.810011       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0701 23:00:54.817885       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0701 23:00:54.923302       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0701 23:01:07.657482       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0701 23:01:07.805142       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0701 23:01:08.440379       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c] <==
	* I0701 23:01:06.998693       1 node_lifecycle_controller.go:1165] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0701 23:01:06.999829       1 shared_informer.go:262] Caches are synced for service account
	I0701 23:01:07.000984       1 shared_informer.go:262] Caches are synced for PV protection
	I0701 23:01:07.002759       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0701 23:01:07.010484       1 shared_informer.go:262] Caches are synced for expand
	I0701 23:01:07.021740       1 shared_informer.go:262] Caches are synced for stateful set
	I0701 23:01:07.048510       1 shared_informer.go:262] Caches are synced for disruption
	I0701 23:01:07.048533       1 disruption.go:371] Sending events to api server.
	I0701 23:01:07.050704       1 shared_informer.go:262] Caches are synced for daemon sets
	I0701 23:01:07.154573       1 shared_informer.go:262] Caches are synced for attach detach
	I0701 23:01:07.172619       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0701 23:01:07.198444       1 shared_informer.go:262] Caches are synced for endpoint
	I0701 23:01:07.207329       1 shared_informer.go:262] Caches are synced for resource quota
	I0701 23:01:07.226645       1 shared_informer.go:262] Caches are synced for resource quota
	I0701 23:01:07.255294       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0701 23:01:07.625587       1 shared_informer.go:262] Caches are synced for garbage collector
	I0701 23:01:07.659581       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0701 23:01:07.683577       1 shared_informer.go:262] Caches are synced for garbage collector
	I0701 23:01:07.683598       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0701 23:01:07.810761       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-49h72"
	I0701 23:01:07.812355       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qg5j2"
	I0701 23:01:08.007720       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-j7d7h"
	I0701 23:01:08.013547       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-zmnqs"
	I0701 23:01:08.206059       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0701 23:01:08.211257       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-j7d7h"
	
	* 
	* ==> kube-proxy [b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7] <==
	* I0701 23:01:08.413673       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0701 23:01:08.413740       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0701 23:01:08.413778       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0701 23:01:08.436458       1 server_others.go:206] "Using iptables Proxier"
	I0701 23:01:08.436499       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0701 23:01:08.436509       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0701 23:01:08.436529       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0701 23:01:08.436562       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 23:01:08.436755       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 23:01:08.437083       1 server.go:661] "Version info" version="v1.24.2"
	I0701 23:01:08.437106       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 23:01:08.437672       1 config.go:226] "Starting endpoint slice config controller"
	I0701 23:01:08.437701       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0701 23:01:08.438341       1 config.go:317] "Starting service config controller"
	I0701 23:01:08.438370       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0701 23:01:08.438585       1 config.go:444] "Starting node config controller"
	I0701 23:01:08.438745       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0701 23:01:08.538588       1 shared_informer.go:262] Caches are synced for service config
	I0701 23:01:08.538607       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0701 23:01:08.539109       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c] <==
	* W0701 23:00:52.118488       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 23:00:52.119392       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0701 23:00:52.119370       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0701 23:00:52.119451       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0701 23:00:52.119361       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 23:00:52.119469       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0701 23:00:52.118588       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0701 23:00:52.119594       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0701 23:00:52.965924       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0701 23:00:52.965973       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0701 23:00:52.981058       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0701 23:00:52.981091       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0701 23:00:53.008284       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0701 23:00:53.008567       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0701 23:00:53.024485       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0701 23:00:53.024517       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0701 23:00:53.118081       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 23:00:53.118128       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0701 23:00:53.118261       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0701 23:00:53.118301       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0701 23:00:53.171211       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 23:00:53.171246       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0701 23:00:53.218071       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0701 23:00:53.218112       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0701 23:00:55.254285       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-07-01 23:00:41 UTC, end at Fri 2022-07-01 23:05:10 UTC. --
	Jul 01 23:03:10 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:03:10.195858    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:03:15 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:03:15.196932    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:03:20 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:03:20.198172    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:03:25 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:03:25.199807    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:03:30 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:03:30.200904    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:03:35 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:03:35.202261    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:03:40 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:03:40.203113    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:03:45 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:03:45.204312    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:03:49 default-k8s-different-port-20220701230032-10066 kubelet[1325]: I0701 23:03:49.326296    1325 scope.go:110] "RemoveContainer" containerID="c11940cc2c8ec5e2f7fd5f8efbe605f5483acdebf6732ce60b631524b5e42b6a"
	Jul 01 23:03:50 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:03:50.205339    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:03:55 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:03:55.205842    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:04:00 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:04:00.207050    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:04:05 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:04:05.208132    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:04:10 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:04:10.209328    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:04:15 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:04:15.211110    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:04:20 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:04:20.212616    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:04:25 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:04:25.214047    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:04:30 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:04:30.215514    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:04:35 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:04:35.216174    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:04:40 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:04:40.218013    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:04:45 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:04:45.219402    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:04:50 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:04:50.220844    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:04:55 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:04:55.222388    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:05:00 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:05:00.223951    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:05:05 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:05:05.225150    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
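The kubelet log tail above shows the root cause of this failure: the CNI plugin was never initialized, so the node's Ready condition stayed False and its not-ready taints were never cleared. When reproducing this by hand, the CNI state inside the minikube node container can be inspected with commands along these lines (an illustrative diagnostic sketch, not part of the captured test run; the container name is simply the profile name):

	docker exec default-k8s-different-port-20220701230032-10066 ls /etc/cni/net.d
	docker exec default-k8s-different-port-20220701230032-10066 crictl info

An empty /etc/cni/net.d, or a crictl info dump whose network condition reports NetworkReady=false, corresponds to the repeating "cni plugin not initialized" kubelet errors above.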
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220701230032-10066 -n default-k8s-different-port-20220701230032-10066
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220701230032-10066 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-6d4b75cb6d-zmnqs storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220701230032-10066 describe pod coredns-6d4b75cb6d-zmnqs storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220701230032-10066 describe pod coredns-6d4b75cb6d-zmnqs storage-provisioner: exit status 1 (49.559906ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-6d4b75cb6d-zmnqs" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220701230032-10066 describe pod coredns-6d4b75cb6d-zmnqs storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/FirstStart (278.59s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (484.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220701225718-10066 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [5c54c8f3-466e-4b60-b1b0-28e9aa7528c5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
E0701 23:02:03.432700   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory
E0701 23:02:05.391522   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
E0701 23:02:05.993163   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory
E0701 23:02:08.177733   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:02:11.114015   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory
E0701 23:02:13.779301   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
E0701 23:02:21.355122   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory
E0701 23:02:23.385880   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
E0701 23:02:28.657960   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:02:32.035078   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: ***** TestStartStop/group/no-preload/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220701225718-10066 -n no-preload-20220701225718-10066
start_stop_delete_test.go:196: TestStartStop/group/no-preload/serial/DeployApp: showing logs for failed pods as of 2022-07-01 23:10:03.56258801 +0000 UTC m=+2792.867525527
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220701225718-10066 describe po busybox -n default
start_stop_delete_test.go:196: (dbg) kubectl --context no-preload-20220701225718-10066 describe po busybox -n default:
Name:         busybox
Namespace:    default
Priority:     0
Node:         <none>
Labels:       integration-test=busybox
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r9hcs (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-r9hcs:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  2m45s (x2 over 8m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220701225718-10066 logs busybox -n default
start_stop_delete_test.go:196: (dbg) kubectl --context no-preload-20220701225718-10066 logs busybox -n default:
start_stop_delete_test.go:196: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
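The FailedScheduling event above explains the 8m0s timeout: the pod's default tolerations cover only the NoExecute form of node.kubernetes.io/not-ready, while the single node still carried the NoSchedule form because its CNI never came up. A quick way to confirm a lingering taint on a live cluster would be something like (hypothetical diagnostic, not executed by this test):

	kubectl --context no-preload-20220701225718-10066 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'

Once the network plugin initializes and the node turns Ready, the taint is removed automatically and the busybox pod would schedule without any changes to its spec.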
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220701225718-10066
helpers_test.go:235: (dbg) docker inspect no-preload-20220701225718-10066:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff",
	        "Created": "2022-07-01T22:57:20.298940328Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 220865,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-01T22:57:20.663867782Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff/hostname",
	        "HostsPath": "/var/lib/docker/containers/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff/hosts",
	        "LogPath": "/var/lib/docker/containers/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff-json.log",
	        "Name": "/no-preload-20220701225718-10066",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220701225718-10066:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220701225718-10066",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f8e3e2d6947308dcda48dbc22fb554a071cfe01234e052dce72729a0c26066f6-init/diff:/var/lib/docker/overlay2/edb6d17bdc5cabdb1fac75f829fa6b696b451285a56daf994d10472ac162d1fa/diff:/var/lib/docker/overlay2/12553db54f9401cb3cd4ea304c82950d708bec4ffbeafa05f361df03389b23f5/diff:/var/lib/docker/overlay2/7e3fcd2813dd5acb24030b1419b7a8a53ea2329287a62c3cd166eb5e47617a4e/diff:/var/lib/docker/overlay2/15f31da888d327ea20972c47badf360a35e5d2b17a3fae8f69cb0d0d1db2bfa0/diff:/var/lib/docker/overlay2/4d7fcdaa5483b16cc999e79056aae74f5814a7aca964696154ac6ec31eebe508/diff:/var/lib/docker/overlay2/9f3a7dbc3c70fba7c77f5d8e54e394ac0c37123defe036d76af7d048d08b974e/diff:/var/lib/docker/overlay2/4f0b0b0343fd3ac1a6d5f774e91228b75f1e33f810714d09d38ecd1e38cfe250/diff:/var/lib/docker/overlay2/bfd684b2b6e2cfe1f30901dbf02332074c65e296b386b083ea38c10128b42115/diff:/var/lib/docker/overlay2/d05e53b39ab1962ee7dd1a17c7c6df7e7db940db5a19b0d9312ccafaf3c67afc/diff:/var/lib/docker/overlay2/7a24a3
8f96e1aff9985bf714d962a37888eced9c4b8a3018e499080634306871/diff:/var/lib/docker/overlay2/b54ee85e20ecd71a593d453d879a26c40869049e30ba5b18b4c021c4cf667ec6/diff:/var/lib/docker/overlay2/cc9f249b45cce30a5a0432a644435d68e765dc097e3bfdbde41b7d725f258866/diff:/var/lib/docker/overlay2/dac381b24f77e5de76295d05ca2d4ae6a611d31efdf28050dc8acbc16f893037/diff:/var/lib/docker/overlay2/fe9cd2517ce90b2330101b987c76605b9d73f40b8c967e33975ad22bd3e427df/diff:/var/lib/docker/overlay2/34da7eb6e789c53ba8f86e0d904243d3314979a523044238ab8b7e9e95f685d4/diff:/var/lib/docker/overlay2/44c3ffe75f70e4d9d5d17064609a4a167961cdca81aab03b2a72dfbe05d09f41/diff:/var/lib/docker/overlay2/76feeb878d0278a48230287307ce3a01000291c1284d2222d114e37836ebc153/diff:/var/lib/docker/overlay2/3ab5eb284edb3402610bb5d4fbd0d8c3fc6b53fd39d2a66cc3a9bb1e313fe5ee/diff:/var/lib/docker/overlay2/7604bb52dbaf4e2c359f0b2b11aa345bbf33498b0346044f4c6ff3178a3e8df5/diff:/var/lib/docker/overlay2/0a19abc50663f9812511b849524a7625a16f183fa64aff5a0a9c1c7664cc2027/diff:/var/lib/d
ocker/overlay2/00d25c59a62da71857955ae29c7c803b77cb4c97b23913f166c62440a3b63456/diff:/var/lib/docker/overlay2/cd6c06e5c4d7bdfb5a78ffcf6c0cf331a6101b713bd238ad5c0ab119ad599bf4/diff:/var/lib/docker/overlay2/73ae644d477b17687dc53b7176c9e05aa10e0e0cc22b0d2adbd96493280d2696/diff:/var/lib/docker/overlay2/1490b9c59b00b3ee6078671a8023960891ee2bfc21bfe92c12a71c87ea04fff1/diff:/var/lib/docker/overlay2/42f40e76b67a2f74340f543b0beab44f98b10e0a0903d4006a173272c52a31d0/diff:/var/lib/docker/overlay2/648763e18e8325f20ff26dd88a1788a2d9e575a9ca24f44b2430918381add179/diff:/var/lib/docker/overlay2/955b13af9b6a054abdcd0dfdf71ce57d6da1b5718bee40c52fc7739b4ff16496/diff:/var/lib/docker/overlay2/1e55f6bfa306e6667808548601e508c0335b7021c9c6791ee5ed3df2514aff12/diff:/var/lib/docker/overlay2/ed1fd4b4489438557a54639b2ed769eb0371193cd0129b222edea8f79274f7e8/diff:/var/lib/docker/overlay2/c4058f514a3e669de2d6139c7bc94d4187679acabf288e44e25b5d50db5b14ac/diff:/var/lib/docker/overlay2/f55c5f04c4a64f585a9b90e7f23e5d5142014f06b3bf877739879029b76
b574f/diff:/var/lib/docker/overlay2/74b9141dda92027ee02e8b3abc4bb05ed13ce01d58f4c3d644e7235e411a863e/diff:/var/lib/docker/overlay2/53dd59b4b1dfa382f4b296dffd27b84e247851098ea1c719fa3b8ed3e158e73d/diff:/var/lib/docker/overlay2/01e3082bf93042f963fa4d0b92796bd5a1f3663e87f24a76a6fb5f3dabdb3d2f/diff:/var/lib/docker/overlay2/6b96d2f01ba7bf16240d81397be753c74fa1eb78889ce36d1a88224496f5ee64/diff:/var/lib/docker/overlay2/1c326eb1859e3f119094b72e595fe89b3be1895193b0f9124ec3aa7be78fa0ff/diff:/var/lib/docker/overlay2/9c31f48fe49160b4b3d824b787329ea1d50133fd86b0c7f2054bebed2bd136b4/diff:/var/lib/docker/overlay2/6b78f7313fcedc631cc839c84f80e065670f47ccc8e548578527bec6b0f2cce3/diff:/var/lib/docker/overlay2/6bf15a167fccdef650b6e6b0daa521189e2c744a9a41640bc0e963421f139437/diff:/var/lib/docker/overlay2/b01230a9beae37bded75410c1a3849626080918fdc36535a1f4092f9227a7ccf/diff:/var/lib/docker/overlay2/793ce1632cff927c8e2528b23a62c94ed141cb52d5df04225085072c61ca5bb7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f8e3e2d6947308dcda48dbc22fb554a071cfe01234e052dce72729a0c26066f6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f8e3e2d6947308dcda48dbc22fb554a071cfe01234e052dce72729a0c26066f6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f8e3e2d6947308dcda48dbc22fb554a071cfe01234e052dce72729a0c26066f6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220701225718-10066",
	                "Source": "/var/lib/docker/volumes/no-preload-20220701225718-10066/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220701225718-10066",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220701225718-10066",
	                "name.minikube.sigs.k8s.io": "no-preload-20220701225718-10066",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "865eb475db627936c6b81b6e3b702ce9e018b17349e5ddb5dde9edb749dbced7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49402"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49401"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49398"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49400"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49399"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/865eb475db62",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220701225718-10066": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6714999bf303",
	                        "no-preload-20220701225718-10066"
	                    ],
	                    "NetworkID": "1edec7b6219d6237636ff26267a26187f0ef2e748e4635b07760f0d37cc8596c",
	                    "EndpointID": "0377a99704388e0f2c261b850c52bf87fff4b394cc37a39d49723586e5d2f940",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
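
The inspect output above shows how the kic container publishes its services: each container port (22, 2376, 5000, 8443, 32443) is bound only on 127.0.0.1 with an ephemeral host port, and /var is backed by a named volume so cluster state survives container restarts. A minimal sketch (not minikube's code; the container name below is just the one from this log) of resolving the SSH host port the same way the harness later does with cli_runner:

// Sketch only: read the host port Docker mapped to a container's 22/tcp,
// i.e. the "22/tcp" -> 127.0.0.1:49402 mapping shown in the inspect output.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	// Same Go template the harness passes to `docker container inspect -f`.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("no-preload-20220701225718-10066")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh port:", port) // e.g. 49402 per the log above
}
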
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220701225718-10066 -n no-preload-20220701225718-10066
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20220701225718-10066 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC | 01 Jul 22 22:59 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC | 01 Jul 22 22:59 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |          |         |         |                     |                     |
	|         | --driver=docker                                            |          |         |         |                     |                     |
	|         | --container-runtime=containerd                             |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:00 UTC | 01 Jul 22 23:00 UTC |
	|         | kubernetes-upgrade-20220701225105-10066                    |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:00 UTC | 01 Jul 22 23:00 UTC |
	|         | disable-driver-mounts-20220701230032-10066                 |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:00 UTC |                     |
	|         | default-k8s-different-port-20220701230032-10066            |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:02 UTC | 01 Jul 22 23:02 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:02 UTC | 01 Jul 22 23:03 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:03 UTC | 01 Jul 22 23:03 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:03 UTC |                     |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                          |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |          |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker                       |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	| start   | -p newest-cni-20220701230537-10066 --memory=2200           | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:06 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p newest-cni-20220701230537-10066 --memory=2200           | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:07 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
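
The Audit table is rendered by `minikube logs` from minikube's command audit log. As a hedged sketch (the $MINIKUBE_HOME/logs/audit.json path and its one-JSON-row-per-line layout are assumptions, not taken from this report), the raw rows behind the table can be streamed without assuming any schema:

// Assumed location and layout; dumps raw audit rows as-is.
package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	f, err := os.Open(filepath.Join(os.Getenv("MINIKUBE_HOME"), "logs", "audit.json"))
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fmt.Println(sc.Text()) // one JSON row per executed command
	}
}
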
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/01 23:06:34
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 23:06:34.414097  258995 out.go:296] Setting OutFile to fd 1 ...
	I0701 23:06:34.414279  258995 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:06:34.414290  258995 out.go:309] Setting ErrFile to fd 2...
	I0701 23:06:34.414298  258995 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:06:34.414739  258995 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 23:06:34.414979  258995 out.go:303] Setting JSON to false
	I0701 23:06:34.416593  258995 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2947,"bootTime":1656713847,"procs":738,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 23:06:34.416656  258995 start.go:125] virtualization: kvm guest
	I0701 23:06:34.419067  258995 out.go:177] * [newest-cni-20220701230537-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0701 23:06:34.420633  258995 out.go:177]   - MINIKUBE_LOCATION=14483
	I0701 23:06:34.420662  258995 notify.go:193] Checking for updates...
	I0701 23:06:34.422188  258995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 23:06:34.423704  258995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:06:34.425146  258995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 23:06:34.426591  258995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 23:06:34.428291  258995 config.go:178] Loaded profile config "newest-cni-20220701230537-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:06:34.428771  258995 driver.go:360] Setting default libvirt URI to qemu:///system
	I0701 23:06:34.470638  258995 docker.go:137] docker version: linux-20.10.17
	I0701 23:06:34.470753  258995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:06:34.579013  258995 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:49 SystemTime:2022-07-01 23:06:34.501366324 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:06:34.579159  258995 docker.go:254] overlay module found
	I0701 23:06:34.581456  258995 out.go:177] * Using the docker driver based on existing profile
	I0701 23:06:34.582890  258995 start.go:284] selected driver: docker
	I0701 23:06:34.582907  258995 start.go:808] validating driver "docker" against &{Name:newest-cni-20220701230537-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220701230537-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:06:34.583008  258995 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 23:06:34.583884  258995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:06:34.690709  258995 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-07-01 23:06:34.615635547 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:06:34.690944  258995 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0701 23:06:34.690963  258995 cni.go:95] Creating CNI manager for ""
	I0701 23:06:34.690974  258995 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:06:34.690988  258995 start_flags.go:310] config:
	{Name:newest-cni-20220701230537-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220701230537-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:06:34.692981  258995 out.go:177] * Starting control plane node newest-cni-20220701230537-10066 in cluster newest-cni-20220701230537-10066
	I0701 23:06:34.694495  258995 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0701 23:06:34.695843  258995 out.go:177] * Pulling base image ...
	I0701 23:06:34.697133  258995 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:06:34.697166  258995 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4
	I0701 23:06:34.697170  258995 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0701 23:06:34.697173  258995 cache.go:57] Caching tarball of preloaded images
	I0701 23:06:34.697459  258995 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 23:06:34.697487  258995 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on containerd
	I0701 23:06:34.697601  258995 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/newest-cni-20220701230537-10066/config.json ...
	I0701 23:06:34.730986  258995 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0701 23:06:34.731010  258995 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0701 23:06:34.731023  258995 cache.go:208] Successfully downloaded all kic artifacts
	I0701 23:06:34.731054  258995 start.go:352] acquiring machines lock for newest-cni-20220701230537-10066: {Name:mk09082a8962197bf1403d5caed70fa1c313958d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:06:34.731127  258995 start.go:356] acquired machines lock for "newest-cni-20220701230537-10066" in 55.863µs
	I0701 23:06:34.731145  258995 start.go:94] Skipping create...Using existing machine configuration
	I0701 23:06:34.731149  258995 fix.go:55] fixHost starting: 
	I0701 23:06:34.731361  258995 cli_runner.go:164] Run: docker container inspect newest-cni-20220701230537-10066 --format={{.State.Status}}
	I0701 23:06:34.765033  258995 fix.go:103] recreateIfNeeded on newest-cni-20220701230537-10066: state=Stopped err=<nil>
	W0701 23:06:34.765086  258995 fix.go:129] unexpected machine state, will restart: <nil>
	I0701 23:06:34.767404  258995 out.go:177] * Restarting existing docker container for "newest-cni-20220701230537-10066" ...
	I0701 23:06:34.165114  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:36.663648  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:34.768673  258995 cli_runner.go:164] Run: docker start newest-cni-20220701230537-10066
	I0701 23:06:35.162994  258995 cli_runner.go:164] Run: docker container inspect newest-cni-20220701230537-10066 --format={{.State.Status}}
	I0701 23:06:35.200123  258995 kic.go:416] container "newest-cni-20220701230537-10066" state is running.
	I0701 23:06:35.200495  258995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220701230537-10066
	I0701 23:06:35.234865  258995 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/newest-cni-20220701230537-10066/config.json ...
	I0701 23:06:35.235114  258995 machine.go:88] provisioning docker machine ...
	I0701 23:06:35.235145  258995 ubuntu.go:169] provisioning hostname "newest-cni-20220701230537-10066"
	I0701 23:06:35.235200  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:06:35.268767  258995 main.go:134] libmachine: Using SSH client type: native
	I0701 23:06:35.268976  258995 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0701 23:06:35.269001  258995 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220701230537-10066 && echo "newest-cni-20220701230537-10066" | sudo tee /etc/hostname
	I0701 23:06:35.269725  258995 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51796->127.0.0.1:49432: read: connection reset by peer
	I0701 23:06:38.394868  258995 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220701230537-10066
	
	I0701 23:06:38.394942  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:06:38.430841  258995 main.go:134] libmachine: Using SSH client type: native
	I0701 23:06:38.431017  258995 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0701 23:06:38.431050  258995 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220701230537-10066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220701230537-10066/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220701230537-10066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 23:06:38.546040  258995 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0701 23:06:38.546068  258995 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
	I0701 23:06:38.546099  258995 ubuntu.go:177] setting up certificates
	I0701 23:06:38.546106  258995 provision.go:83] configureAuth start
	I0701 23:06:38.546147  258995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220701230537-10066
	I0701 23:06:38.580939  258995 provision.go:138] copyHostCerts
	I0701 23:06:38.581009  258995 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
	I0701 23:06:38.581028  258995 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
	I0701 23:06:38.581116  258995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1679 bytes)
	I0701 23:06:38.581229  258995 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
	I0701 23:06:38.581242  258995 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
	I0701 23:06:38.581284  258995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
	I0701 23:06:38.581355  258995 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
	I0701 23:06:38.581367  258995 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
	I0701 23:06:38.581403  258995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
	I0701 23:06:38.581462  258995 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220701230537-10066 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220701230537-10066]
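
The configureAuth step above issues a server certificate whose subject-alternative names match every address the node answers on (the san=[...] list in the log). An illustrative sketch with crypto/x509, using the SANs and 26280h expiry from this run; unlike provision.go, which signs against minikube's CA in certs/ca.pem, this simplified version self-signs:

// Illustrative only: self-signed server cert carrying the logged SAN list.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-20220701230537-10066"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as logged: san=[192.168.67.2 127.0.0.1 localhost ... minikube newest-cni-...]
		IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "newest-cni-20220701230537-10066"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
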
	I0701 23:06:38.757827  258995 provision.go:172] copyRemoteCerts
	I0701 23:06:38.757890  258995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 23:06:38.757937  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:06:38.792195  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/newest-cni-20220701230537-10066/id_rsa Username:docker}
	I0701 23:06:38.881702  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 23:06:38.899024  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0701 23:06:38.915653  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 23:06:38.932402  258995 provision.go:86] duration metric: configureAuth took 386.287684ms
	I0701 23:06:38.932423  258995 ubuntu.go:193] setting minikube options for container-runtime
	I0701 23:06:38.932594  258995 config.go:178] Loaded profile config "newest-cni-20220701230537-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:06:38.932607  258995 machine.go:91] provisioned docker machine in 3.697475646s
	I0701 23:06:38.932616  258995 start.go:306] post-start starting for "newest-cni-20220701230537-10066" (driver="docker")
	I0701 23:06:38.932622  258995 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 23:06:38.932671  258995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 23:06:38.932715  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:06:38.969297  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/newest-cni-20220701230537-10066/id_rsa Username:docker}
	I0701 23:06:39.054415  258995 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 23:06:39.057045  258995 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 23:06:39.057067  258995 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 23:06:39.057075  258995 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 23:06:39.057081  258995 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0701 23:06:39.057089  258995 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
	I0701 23:06:39.057136  258995 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
	I0701 23:06:39.057208  258995 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem -> 100662.pem in /etc/ssl/certs
	I0701 23:06:39.057285  258995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 23:06:39.063810  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:06:39.081005  258995 start.go:309] post-start completed in 148.379288ms
	I0701 23:06:39.081068  258995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 23:06:39.081108  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:06:39.114447  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/newest-cni-20220701230537-10066/id_rsa Username:docker}
	I0701 23:06:39.194921  258995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 23:06:39.199018  258995 fix.go:57] fixHost completed within 4.467863189s
	I0701 23:06:39.199040  258995 start.go:81] releasing machines lock for "newest-cni-20220701230537-10066", held for 4.467899434s
	I0701 23:06:39.199121  258995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220701230537-10066
	I0701 23:06:39.234497  258995 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0701 23:06:39.234517  258995 ssh_runner.go:195] Run: systemctl --version
	I0701 23:06:39.234595  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:06:39.234601  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:06:39.269988  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/newest-cni-20220701230537-10066/id_rsa Username:docker}
	I0701 23:06:39.270685  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/newest-cni-20220701230537-10066/id_rsa Username:docker}
	I0701 23:06:39.372455  258995 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 23:06:39.383372  258995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 23:06:39.392509  258995 docker.go:179] disabling docker service ...
	I0701 23:06:39.392563  258995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 23:06:39.401779  258995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 23:06:39.410110  258995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 23:06:38.664279  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:40.664458  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:39.492683  258995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 23:06:39.567308  258995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0701 23:06:39.575996  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 23:06:39.588820  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0701 23:06:39.596772  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0701 23:06:39.604536  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0701 23:06:39.612557  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0701 23:06:39.620178  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0701 23:06:39.627399  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %!s(MISSING) "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
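
The base64 payload written to 02-containerd.conf decodes to the single line `version = 2`, the containerd v2 config header that the `# imports` rewrite above points at. A quick check:

// Decodes the exact payload from the log line above.
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	b, err := base64.StdEncoding.DecodeString("dmVyc2lvbiA9IDIK")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", b) // "version = 2\n"
}
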
	I0701 23:06:39.640030  258995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 23:06:39.646248  258995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 23:06:39.652664  258995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 23:06:39.723703  258995 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 23:06:39.793613  258995 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0701 23:06:39.793683  258995 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0701 23:06:39.797185  258995 start.go:471] Will wait 60s for crictl version
	I0701 23:06:39.797234  258995 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:06:39.823490  258995 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-07-01T23:06:39Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0701 23:06:43.164221  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:45.164804  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:47.664048  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:50.870686  258995 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:06:50.894066  258995 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0701 23:06:50.894125  258995 ssh_runner.go:195] Run: containerd --version
	I0701 23:06:50.922238  258995 ssh_runner.go:195] Run: containerd --version
	I0701 23:06:50.953006  258995 out.go:177] * Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	I0701 23:06:50.954593  258995 cli_runner.go:164] Run: docker network inspect newest-cni-20220701230537-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 23:06:50.988248  258995 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0701 23:06:50.991464  258995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
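
The one-liner above makes host.minikube.internal resolve to the network gateway (192.168.67.1) by filtering out any existing entry and appending a fresh one. A rough Go equivalent, illustrative only:

// Illustrative only: drop any existing "<tab>host.minikube.internal" line,
// then append "ip<tab>name", mirroring the shell pipeline in the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.67.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
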
	I0701 23:06:51.002170  258995 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0701 23:06:49.664251  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:52.164105  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:51.003471  258995 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:06:51.003534  258995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:06:51.026300  258995 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:06:51.026322  258995 containerd.go:461] Images already preloaded, skipping extraction
	I0701 23:06:51.026369  258995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:06:51.049194  258995 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:06:51.049211  258995 cache_images.go:84] Images are preloaded, skipping loading
	I0701 23:06:51.049251  258995 ssh_runner.go:195] Run: sudo crictl info
	I0701 23:06:51.072512  258995 cni.go:95] Creating CNI manager for ""
	I0701 23:06:51.072531  258995 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:06:51.072542  258995 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0701 23:06:51.072554  258995 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220701230537-10066 NodeName:newest-cni-20220701230537-10066 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0701 23:06:51.072676  258995 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20220701230537-10066"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
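	The rendered file stacks four kubeadm API documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". To compare it against the upstream defaults for this release, kubeadm can print its own baseline (a sketch; these subcommands exist in kubeadm v1.24):

	$ kubeadm config print init-defaults
	$ kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration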
	
	I0701 23:06:51.072751  258995 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220701230537-10066 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220701230537-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
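	The drop-in above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; the empty ExecStart= line deliberately clears the packaged unit's command before minikube's full kubelet invocation replaces it. Once inside the node, the merged result can be inspected with systemd itself (a sketch):

	$ systemctl cat kubelet                  # unit file plus all drop-ins, in merge order
	$ systemctl show kubelet -p ExecStart    # the effective command line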
	I0701 23:06:51.072793  258995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0701 23:06:51.079832  258995 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 23:06:51.079884  258995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 23:06:51.086363  258995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (561 bytes)
	I0701 23:06:51.098788  258995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 23:06:51.111342  258995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2196 bytes)
	I0701 23:06:51.123722  258995 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0701 23:06:51.126643  258995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:06:51.135317  258995 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/newest-cni-20220701230537-10066 for IP: 192.168.67.2
	I0701 23:06:51.135416  258995 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
	I0701 23:06:51.135484  258995 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
	I0701 23:06:51.135580  258995 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/newest-cni-20220701230537-10066/client.key
	I0701 23:06:51.135648  258995 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/newest-cni-20220701230537-10066/apiserver.key.c7fa3a9e
	I0701 23:06:51.135702  258995 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/newest-cni-20220701230537-10066/proxy-client.key
	I0701 23:06:51.135842  258995 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem (1338 bytes)
	W0701 23:06:51.135889  258995 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066_empty.pem, impossibly tiny 0 bytes
	I0701 23:06:51.135906  258995 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 23:06:51.135941  258995 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
	I0701 23:06:51.135976  258995 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
	I0701 23:06:51.136009  258995 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1679 bytes)
	I0701 23:06:51.136063  258995 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:06:51.136808  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/newest-cni-20220701230537-10066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0701 23:06:51.153739  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/newest-cni-20220701230537-10066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 23:06:51.171043  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/newest-cni-20220701230537-10066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 23:06:51.187876  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/newest-cni-20220701230537-10066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0701 23:06:51.205092  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 23:06:51.222443  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0701 23:06:51.239190  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 23:06:51.255686  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 23:06:51.272278  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /usr/share/ca-certificates/100662.pem (1708 bytes)
	I0701 23:06:51.288886  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 23:06:51.305646  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem --> /usr/share/ca-certificates/10066.pem (1338 bytes)
	I0701 23:06:51.321574  258995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 23:06:51.333572  258995 ssh_runner.go:195] Run: openssl version
	I0701 23:06:51.337924  258995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100662.pem && ln -fs /usr/share/ca-certificates/100662.pem /etc/ssl/certs/100662.pem"
	I0701 23:06:51.344691  258995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100662.pem
	I0701 23:06:51.347642  258995 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul  1 22:28 /usr/share/ca-certificates/100662.pem
	I0701 23:06:51.347698  258995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100662.pem
	I0701 23:06:51.352115  258995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100662.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 23:06:51.358296  258995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 23:06:51.365046  258995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:06:51.367828  258995 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul  1 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:06:51.367863  258995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:06:51.372300  258995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 23:06:51.378814  258995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10066.pem && ln -fs /usr/share/ca-certificates/10066.pem /etc/ssl/certs/10066.pem"
	I0701 23:06:51.385751  258995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10066.pem
	I0701 23:06:51.388493  258995 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul  1 22:28 /usr/share/ca-certificates/10066.pem
	I0701 23:06:51.388532  258995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10066.pem
	I0701 23:06:51.393152  258995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10066.pem /etc/ssl/certs/51391683.0"
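	This openssl/ln sequence implements OpenSSL's hashed-directory convention: every CA in /etc/ssl/certs must be reachable via a symlink named <subject-hash>.0, which is how TLS clients locate it. The b5213941.0 link above is derived from the minikubeCA certificate (a sketch mirroring the logged commands):

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0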
	I0701 23:06:51.399411  258995 kubeadm.go:395] StartCluster: {Name:newest-cni-20220701230537-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220701230537-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:06:51.399484  258995 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0701 23:06:51.399513  258995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:06:51.423320  258995 cri.go:87] found id: "15e59c91bb35a121a6064d222855fdf8348aa49db52a0402e51c98a37db31217"
	I0701 23:06:51.423349  258995 cri.go:87] found id: "ab56834fe0ef673390b371b4f10026a71c82af260e19d3cb94a92d31ab2ab128"
	I0701 23:06:51.423358  258995 cri.go:87] found id: "e704eecd1e08fe666557c22416eea4b800bdb008c82897995c74cbca68f3dc25"
	I0701 23:06:51.423364  258995 cri.go:87] found id: "c99e390f96da42151f2fad7726b93e164928077e543aada0e5236cfc43666e39"
	I0701 23:06:51.423370  258995 cri.go:87] found id: "e725efc4c7a224927581ed2f09c7cf2d7df8a546cf0b0bc29620d113ee10791e"
	I0701 23:06:51.423376  258995 cri.go:87] found id: "150015fbfd6f8938126af2269f26d19d16e82786e8d6b2b5f0b62657b7a03a80"
	I0701 23:06:51.423382  258995 cri.go:87] found id: ""
	I0701 23:06:51.423411  258995 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0701 23:06:51.435450  258995 cri.go:114] JSON = null
	W0701 23:06:51.435489  258995 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0701 23:06:51.435526  258995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 23:06:51.441918  258995 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0701 23:06:51.441934  258995 kubeadm.go:626] restartCluster start
	I0701 23:06:51.441964  258995 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 23:06:51.448643  258995 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:51.449685  258995 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220701230537-10066" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:06:51.450233  258995 kubeconfig.go:127] "newest-cni-20220701230537-10066" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig - will repair!
	I0701 23:06:51.451274  258995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:06:51.452754  258995 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 23:06:51.459930  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:51.459971  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:51.467554  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
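	This probe repeats below at roughly 200ms intervals until the deadline; the pgrep flags carry the logic: -f matches against the full command line, -x requires the pattern to match that whole line (the regex's .* makes that work), and -n returns only the newest match. Run by hand it exits 1 until an apiserver process exists (a sketch):

	$ sudo pgrep -xnf 'kube-apiserver.*minikube.*'; echo "exit=$?"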
	I0701 23:06:51.667947  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:51.668010  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:51.677081  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:51.868396  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:51.868468  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:51.877241  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:52.068521  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:52.068611  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:52.077586  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:52.267700  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:52.267771  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:52.276288  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:52.468495  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:52.468583  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:52.477308  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:52.668618  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:52.668753  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:52.677279  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:52.868579  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:52.868641  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:52.877568  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:53.067780  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:53.067852  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:53.077081  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:53.268390  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:53.268471  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:53.276990  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:53.468289  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:53.468362  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:53.476813  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:53.668091  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:53.668156  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:53.676949  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:53.868230  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:53.868307  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:53.876772  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:54.068055  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:54.068126  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:54.076663  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:54.267936  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:54.267998  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:54.276276  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:54.164784  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:56.663515  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:54.467844  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:54.467916  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:54.476315  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:54.476336  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:54.476368  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:54.484053  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:54.484075  258995 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0701 23:06:54.484083  258995 kubeadm.go:1092] stopping kube-system containers ...
	I0701 23:06:54.484108  258995 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0701 23:06:54.484158  258995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:06:54.506999  258995 cri.go:87] found id: "15e59c91bb35a121a6064d222855fdf8348aa49db52a0402e51c98a37db31217"
	I0701 23:06:54.507021  258995 cri.go:87] found id: "ab56834fe0ef673390b371b4f10026a71c82af260e19d3cb94a92d31ab2ab128"
	I0701 23:06:54.507030  258995 cri.go:87] found id: "e704eecd1e08fe666557c22416eea4b800bdb008c82897995c74cbca68f3dc25"
	I0701 23:06:54.507040  258995 cri.go:87] found id: "c99e390f96da42151f2fad7726b93e164928077e543aada0e5236cfc43666e39"
	I0701 23:06:54.507049  258995 cri.go:87] found id: "e725efc4c7a224927581ed2f09c7cf2d7df8a546cf0b0bc29620d113ee10791e"
	I0701 23:06:54.507063  258995 cri.go:87] found id: "150015fbfd6f8938126af2269f26d19d16e82786e8d6b2b5f0b62657b7a03a80"
	I0701 23:06:54.507076  258995 cri.go:87] found id: ""
	I0701 23:06:54.507088  258995 cri.go:232] Stopping containers: [15e59c91bb35a121a6064d222855fdf8348aa49db52a0402e51c98a37db31217 ab56834fe0ef673390b371b4f10026a71c82af260e19d3cb94a92d31ab2ab128 e704eecd1e08fe666557c22416eea4b800bdb008c82897995c74cbca68f3dc25 c99e390f96da42151f2fad7726b93e164928077e543aada0e5236cfc43666e39 e725efc4c7a224927581ed2f09c7cf2d7df8a546cf0b0bc29620d113ee10791e 150015fbfd6f8938126af2269f26d19d16e82786e8d6b2b5f0b62657b7a03a80]
	I0701 23:06:54.507133  258995 ssh_runner.go:195] Run: which crictl
	I0701 23:06:54.509739  258995 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 15e59c91bb35a121a6064d222855fdf8348aa49db52a0402e51c98a37db31217 ab56834fe0ef673390b371b4f10026a71c82af260e19d3cb94a92d31ab2ab128 e704eecd1e08fe666557c22416eea4b800bdb008c82897995c74cbca68f3dc25 c99e390f96da42151f2fad7726b93e164928077e543aada0e5236cfc43666e39 e725efc4c7a224927581ed2f09c7cf2d7df8a546cf0b0bc29620d113ee10791e 150015fbfd6f8938126af2269f26d19d16e82786e8d6b2b5f0b62657b7a03a80
	I0701 23:06:54.533856  258995 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0701 23:06:54.543470  258995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:06:54.550132  258995 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul  1 23:05 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul  1 23:05 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jul  1 23:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul  1 23:05 /etc/kubernetes/scheduler.conf
	
	I0701 23:06:54.550194  258995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0701 23:06:54.557308  258995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0701 23:06:54.563693  258995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0701 23:06:54.570164  258995 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:54.570204  258995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 23:06:54.576255  258995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0701 23:06:54.582576  258995 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:54.582612  258995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0701 23:06:54.588536  258995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:06:54.594900  258995 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0701 23:06:54.594920  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:06:54.638253  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:06:55.351096  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:06:55.538288  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:06:55.589369  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:06:55.726097  258995 api_server.go:51] waiting for apiserver process to appear ...
	I0701 23:06:55.726152  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:06:56.235342  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:06:56.735369  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:06:56.747981  258995 api_server.go:71] duration metric: took 1.02187943s to wait for apiserver process to appear ...
	I0701 23:06:56.748013  258995 api_server.go:87] waiting for apiserver healthz status ...
	I0701 23:06:56.748027  258995 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0701 23:06:56.748455  258995 api_server.go:256] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0701 23:06:57.249176  258995 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0701 23:06:59.872070  258995 api_server.go:266] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0701 23:06:59.872161  258995 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0701 23:07:00.249349  258995 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0701 23:07:00.254757  258995 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:07:00.254789  258995 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:07:00.749355  258995 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0701 23:07:00.753608  258995 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:07:00.753630  258995 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:07:01.249222  258995 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0701 23:07:01.253956  258995 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0701 23:07:01.260004  258995 api_server.go:140] control plane version: v1.24.2
	I0701 23:07:01.260026  258995 api_server.go:130] duration metric: took 4.512008107s to wait for apiserver health ...
	I0701 23:07:01.260035  258995 cni.go:95] Creating CNI manager for ""
	I0701 23:07:01.260041  258995 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:07:01.262034  258995 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:07:01.263670  258995 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:07:01.267622  258995 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:07:01.267646  258995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:07:01.282196  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
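	kindnet is applied from an in-memory manifest via the version-pinned kubectl shipped with the cluster. Once applied, its daemonset pod (kindnet-gj46g in the pod list below) can be checked the same way (a sketch; the app=kindnet label selector is an assumption about the kindnet manifest):

	$ sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get pods -l app=kindnet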
	I0701 23:07:02.149306  258995 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 23:07:02.155786  258995 system_pods.go:59] 9 kube-system pods found
	I0701 23:07:02.155815  258995 system_pods.go:61] "coredns-6d4b75cb6d-qgxl7" [80686edc-f7b3-4be5-a9d0-91b187b1b0bc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:07:02.155823  258995 system_pods.go:61] "etcd-newest-cni-20220701230537-10066" [7a5e1e12-f631-485a-8324-e3e2a26c67c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0701 23:07:02.155830  258995 system_pods.go:61] "kindnet-gj46g" [f7c7e015-c6c8-4e86-b1c3-de4ed4f1ea38] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0701 23:07:02.155836  258995 system_pods.go:61] "kube-apiserver-newest-cni-20220701230537-10066" [8960b5c1-6baa-444c-94f9-9d7f32b4a545] Running
	I0701 23:07:02.155845  258995 system_pods.go:61] "kube-controller-manager-newest-cni-20220701230537-10066" [e083017e-b2be-407f-b065-b42aae13b35f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0701 23:07:02.155855  258995 system_pods.go:61] "kube-proxy-xgmtt" [6c60e6df-47d0-4ac9-9540-87afae30a047] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0701 23:07:02.155865  258995 system_pods.go:61] "kube-scheduler-newest-cni-20220701230537-10066" [42a3efda-99c0-4cb6-bf0f-25fd8457f229] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0701 23:07:02.155872  258995 system_pods.go:61] "metrics-server-5c6f97fb75-jlkjx" [67a9c41d-57b7-47e2-b9c2-ec9787a1f3d2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:07:02.155878  258995 system_pods.go:61] "storage-provisioner" [0de651df-edaa-428d-96f4-1c501a24c13d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:07:02.155883  258995 system_pods.go:74] duration metric: took 6.555514ms to wait for pod list to return data ...
	I0701 23:07:02.155890  258995 node_conditions.go:102] verifying NodePressure condition ...
	I0701 23:07:02.157924  258995 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0701 23:07:02.157948  258995 node_conditions.go:123] node cpu capacity is 8
	I0701 23:07:02.157961  258995 node_conditions.go:105] duration metric: took 2.066517ms to run NodePressure ...
	I0701 23:07:02.157980  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:07:02.317385  258995 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 23:07:02.324748  258995 ops.go:34] apiserver oom_adj: -16
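	An oom_adj of -16 confirms the kernel OOM killer has been biased strongly away from the apiserver (negative values lower the kill priority; -17 on this legacy interface disables OOM killing entirely). The modern equivalent knob can be read the same way (a sketch):

	$ cat /proc/$(pgrep kube-apiserver)/oom_score_adj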
	I0701 23:07:02.324771  258995 kubeadm.go:630] restartCluster took 10.882830695s
	I0701 23:07:02.324779  258995 kubeadm.go:397] StartCluster complete in 10.925374477s
	I0701 23:07:02.324797  258995 settings.go:142] acquiring lock: {Name:mk319951f11766fbe002e53432d5b04e4322851e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:07:02.324908  258995 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:07:02.326175  258995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:07:02.332030  258995 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220701230537-10066" rescaled to 1
	I0701 23:07:02.332098  258995 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 23:07:02.332116  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 23:07:02.334668  258995 out.go:177] * Verifying Kubernetes components...
	I0701 23:07:02.332189  258995 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0701 23:07:02.332347  258995 config.go:178] Loaded profile config "newest-cni-20220701230537-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:07:02.335960  258995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:07:02.335984  258995 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220701230537-10066"
	I0701 23:07:02.336001  258995 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220701230537-10066"
	I0701 23:07:02.336019  258995 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220701230537-10066"
	I0701 23:07:02.336022  258995 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220701230537-10066"
	W0701 23:07:02.336032  258995 addons.go:162] addon storage-provisioner should already be in state true
	I0701 23:07:02.336053  258995 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220701230537-10066"
	W0701 23:07:02.336068  258995 addons.go:162] addon metrics-server should already be in state true
	I0701 23:07:02.336087  258995 host.go:66] Checking if "newest-cni-20220701230537-10066" exists ...
	I0701 23:07:02.336115  258995 host.go:66] Checking if "newest-cni-20220701230537-10066" exists ...
	I0701 23:07:02.336024  258995 addons.go:65] Setting dashboard=true in profile "newest-cni-20220701230537-10066"
	I0701 23:07:02.336405  258995 addons.go:153] Setting addon dashboard=true in "newest-cni-20220701230537-10066"
	W0701 23:07:02.336419  258995 addons.go:162] addon dashboard should already be in state true
	I0701 23:07:02.336456  258995 host.go:66] Checking if "newest-cni-20220701230537-10066" exists ...
	I0701 23:07:02.336598  258995 cli_runner.go:164] Run: docker container inspect newest-cni-20220701230537-10066 --format={{.State.Status}}
	I0701 23:07:02.336649  258995 cli_runner.go:164] Run: docker container inspect newest-cni-20220701230537-10066 --format={{.State.Status}}
	I0701 23:07:02.336026  258995 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220701230537-10066"
	I0701 23:07:02.336889  258995 cli_runner.go:164] Run: docker container inspect newest-cni-20220701230537-10066 --format={{.State.Status}}
	I0701 23:07:02.337135  258995 cli_runner.go:164] Run: docker container inspect newest-cni-20220701230537-10066 --format={{.State.Status}}
	I0701 23:07:02.387250  258995 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 23:07:02.388620  258995 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0701 23:07:02.389854  258995 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0701 23:07:02.389876  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0701 23:07:02.388657  258995 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:07:02.389940  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 23:07:02.390008  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:07:02.391427  258995 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0701 23:07:02.389915  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:07:02.392909  258995 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0701 23:06:58.664292  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:01.164567  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:02.394340  258995 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0701 23:07:02.394362  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0701 23:07:02.394411  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:07:02.397797  258995 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220701230537-10066"
	W0701 23:07:02.397823  258995 addons.go:162] addon default-storageclass should already be in state true
	I0701 23:07:02.397849  258995 host.go:66] Checking if "newest-cni-20220701230537-10066" exists ...
	I0701 23:07:02.398360  258995 cli_runner.go:164] Run: docker container inspect newest-cni-20220701230537-10066 --format={{.State.Status}}
	I0701 23:07:02.418340  258995 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0701 23:07:02.418351  258995 api_server.go:51] waiting for apiserver process to appear ...
	I0701 23:07:02.418416  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:07:02.432659  258995 api_server.go:71] duration metric: took 100.522964ms to wait for apiserver process to appear ...
	I0701 23:07:02.432716  258995 api_server.go:87] waiting for apiserver healthz status ...
	I0701 23:07:02.432731  258995 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0701 23:07:02.438173  258995 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0701 23:07:02.439473  258995 api_server.go:140] control plane version: v1.24.2
	I0701 23:07:02.439496  258995 api_server.go:130] duration metric: took 6.77174ms to wait for apiserver health ...
	I0701 23:07:02.439506  258995 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 23:07:02.440178  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/newest-cni-20220701230537-10066/id_rsa Username:docker}
	I0701 23:07:02.443373  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/newest-cni-20220701230537-10066/id_rsa Username:docker}
	I0701 23:07:02.446379  258995 system_pods.go:59] 9 kube-system pods found
	I0701 23:07:02.446414  258995 system_pods.go:61] "coredns-6d4b75cb6d-qgxl7" [80686edc-f7b3-4be5-a9d0-91b187b1b0bc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:07:02.446427  258995 system_pods.go:61] "etcd-newest-cni-20220701230537-10066" [7a5e1e12-f631-485a-8324-e3e2a26c67c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0701 23:07:02.446443  258995 system_pods.go:61] "kindnet-gj46g" [f7c7e015-c6c8-4e86-b1c3-de4ed4f1ea38] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0701 23:07:02.446462  258995 system_pods.go:61] "kube-apiserver-newest-cni-20220701230537-10066" [8960b5c1-6baa-444c-94f9-9d7f32b4a545] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0701 23:07:02.446483  258995 system_pods.go:61] "kube-controller-manager-newest-cni-20220701230537-10066" [e083017e-b2be-407f-b065-b42aae13b35f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0701 23:07:02.446500  258995 system_pods.go:61] "kube-proxy-xgmtt" [6c60e6df-47d0-4ac9-9540-87afae30a047] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0701 23:07:02.446517  258995 system_pods.go:61] "kube-scheduler-newest-cni-20220701230537-10066" [42a3efda-99c0-4cb6-bf0f-25fd8457f229] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0701 23:07:02.446533  258995 system_pods.go:61] "metrics-server-5c6f97fb75-jlkjx" [67a9c41d-57b7-47e2-b9c2-ec9787a1f3d2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:07:02.446560  258995 system_pods.go:61] "storage-provisioner" [0de651df-edaa-428d-96f4-1c501a24c13d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:07:02.446569  258995 system_pods.go:74] duration metric: took 7.055605ms to wait for pod list to return data ...
	I0701 23:07:02.446579  258995 default_sa.go:34] waiting for default service account to be created ...
	I0701 23:07:02.448805  258995 default_sa.go:45] found service account: "default"
	I0701 23:07:02.448826  258995 default_sa.go:55] duration metric: took 2.239766ms for default service account to be created ...
	I0701 23:07:02.448838  258995 kubeadm.go:572] duration metric: took 116.706896ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0701 23:07:02.448863  258995 node_conditions.go:102] verifying NodePressure condition ...
	I0701 23:07:02.451280  258995 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0701 23:07:02.451299  258995 node_conditions.go:123] node cpu capacity is 8
	I0701 23:07:02.451308  258995 node_conditions.go:105] duration metric: took 2.440834ms to run NodePressure ...
	I0701 23:07:02.451317  258995 start.go:216] waiting for startup goroutines ...
	I0701 23:07:02.453174  258995 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 23:07:02.453195  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 23:07:02.453240  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:07:02.455269  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/newest-cni-20220701230537-10066/id_rsa Username:docker}
	I0701 23:07:02.490429  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/newest-cni-20220701230537-10066/id_rsa Username:docker}
	I0701 23:07:02.541921  258995 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0701 23:07:02.541948  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0701 23:07:02.544002  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:07:02.548391  258995 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0701 23:07:02.548414  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0701 23:07:02.556595  258995 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0701 23:07:02.556619  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0701 23:07:02.563079  258995 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0701 23:07:02.563100  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0701 23:07:02.571062  258995 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:07:02.571085  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0701 23:07:02.621276  258995 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0701 23:07:02.621307  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0701 23:07:02.629034  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 23:07:02.633706  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:07:02.638778  258995 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0701 23:07:02.638800  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0701 23:07:02.719928  258995 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0701 23:07:02.719955  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0701 23:07:02.736919  258995 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0701 23:07:02.736949  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0701 23:07:02.752189  258995 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0701 23:07:02.752223  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0701 23:07:02.837060  258995 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0701 23:07:02.837088  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0701 23:07:02.921752  258995 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:07:02.921780  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0701 23:07:02.943594  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:07:03.163783  258995 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220701230537-10066"
	I0701 23:07:03.360662  258995 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0701 23:07:03.362046  258995 addons.go:414] enableAddons completed in 1.029861294s
	I0701 23:07:03.405602  258995 start.go:506] kubectl: 1.24.2, cluster: 1.24.2 (minor skew: 0)
	I0701 23:07:03.407966  258995 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220701230537-10066" cluster and "default" namespace by default
	I0701 23:07:03.664250  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:06.164316  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:08.164599  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:10.664063  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:12.664461  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:15.164133  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:17.164382  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:19.663632  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:21.663818  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:23.663967  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:26.164306  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:28.164930  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:30.664144  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:33.164605  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:35.663750  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:37.664339  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:40.164476  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:42.664249  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:45.164701  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:47.164787  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:49.664037  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:51.664111  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:54.164443  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:56.164925  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:58.664564  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:01.163987  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:03.164334  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:05.164460  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:07.164721  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:09.663979  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:11.664277  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:14.164029  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:16.164501  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:18.663698  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:20.664170  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:22.664334  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:25.164419  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:27.165462  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:29.663781  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:31.664057  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:34.164054  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:34.659436  245311 pod_ready.go:81] duration metric: took 4m0.00463364s waiting for pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace to be "Ready" ...
	E0701 23:08:34.659463  245311 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0701 23:08:34.659488  245311 pod_ready.go:38] duration metric: took 4m0.009979733s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:08:34.659524  245311 kubeadm.go:630] restartCluster took 5m9.668121905s
	W0701 23:08:34.659777  245311 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0701 23:08:34.659816  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0701 23:08:36.863542  245311 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.203696859s)
	I0701 23:08:36.863608  245311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:08:36.872901  245311 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:08:36.879734  245311 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0701 23:08:36.879789  245311 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:08:36.886494  245311 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 23:08:36.886622  245311 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0701 23:08:37.247242  245311 out.go:204]   - Generating certificates and keys ...
	I0701 23:08:37.829860  245311 out.go:204]   - Booting up control plane ...
	I0701 23:08:47.875461  245311 out.go:204]   - Configuring RBAC rules ...
	I0701 23:08:48.291666  245311 cni.go:95] Creating CNI manager for ""
	I0701 23:08:48.291688  245311 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:08:48.293328  245311 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:08:48.294742  245311 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:08:48.298483  245311 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0701 23:08:48.298500  245311 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:08:48.311207  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0701 23:08:48.651857  245311 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 23:08:48.651944  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04 minikube.k8s.io/name=old-k8s-version-20220701225700-10066 minikube.k8s.io/updated_at=2022_07_01T23_08_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:48.651945  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:48.747820  245311 ops.go:34] apiserver oom_adj: -16
	I0701 23:08:48.747905  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:49.337647  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:49.837281  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:50.338027  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:50.837775  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:51.337990  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:51.837271  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:52.337894  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:52.837228  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:53.337964  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:53.837148  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:54.337173  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:54.837440  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:55.338035  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:55.837367  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:56.337614  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:56.837152  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:57.338000  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:57.837608  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:58.337540  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:58.837358  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:59.337869  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:59.837945  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:09:00.337719  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:09:00.837142  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:09:01.337879  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:09:01.838090  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:09:02.337084  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:09:02.838073  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:09:02.903841  245311 kubeadm.go:1045] duration metric: took 14.251975622s to wait for elevateKubeSystemPrivileges.
	I0701 23:09:02.903872  245311 kubeadm.go:397] StartCluster complete in 5m37.955831889s
	I0701 23:09:02.903898  245311 settings.go:142] acquiring lock: {Name:mk319951f11766fbe002e53432d5b04e4322851e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:09:02.904009  245311 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:09:02.905291  245311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:09:03.420619  245311 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20220701225700-10066" rescaled to 1
	I0701 23:09:03.420688  245311 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 23:09:03.422822  245311 out.go:177] * Verifying Kubernetes components...
	I0701 23:09:03.420742  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 23:09:03.420783  245311 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0701 23:09:03.420998  245311 config.go:178] Loaded profile config "old-k8s-version-20220701225700-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0701 23:09:03.424284  245311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:09:03.424342  245311 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20220701225700-10066"
	I0701 23:09:03.424360  245311 addons.go:65] Setting dashboard=true in profile "old-k8s-version-20220701225700-10066"
	I0701 23:09:03.424367  245311 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-20220701225700-10066"
	I0701 23:09:03.424374  245311 addons.go:153] Setting addon dashboard=true in "old-k8s-version-20220701225700-10066"
	I0701 23:09:03.424379  245311 addons.go:153] Setting addon metrics-server=true in "old-k8s-version-20220701225700-10066"
	W0701 23:09:03.424383  245311 addons.go:162] addon dashboard should already be in state true
	W0701 23:09:03.424386  245311 addons.go:162] addon metrics-server should already be in state true
	I0701 23:09:03.424347  245311 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20220701225700-10066"
	I0701 23:09:03.424405  245311 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20220701225700-10066"
	I0701 23:09:03.424432  245311 host.go:66] Checking if "old-k8s-version-20220701225700-10066" exists ...
	I0701 23:09:03.424385  245311 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20220701225700-10066"
	W0701 23:09:03.424508  245311 addons.go:162] addon storage-provisioner should already be in state true
	I0701 23:09:03.424433  245311 host.go:66] Checking if "old-k8s-version-20220701225700-10066" exists ...
	I0701 23:09:03.424569  245311 host.go:66] Checking if "old-k8s-version-20220701225700-10066" exists ...
	I0701 23:09:03.424817  245311 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220701225700-10066 --format={{.State.Status}}
	I0701 23:09:03.424971  245311 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220701225700-10066 --format={{.State.Status}}
	I0701 23:09:03.424991  245311 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220701225700-10066 --format={{.State.Status}}
	I0701 23:09:03.424995  245311 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220701225700-10066 --format={{.State.Status}}
	I0701 23:09:03.481261  245311 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0701 23:09:03.482772  245311 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0701 23:09:03.483961  245311 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0701 23:09:03.483976  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0701 23:09:03.484018  245311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220701225700-10066
	I0701 23:09:03.482745  245311 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0701 23:09:03.482858  245311 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20220701225700-10066"
	I0701 23:09:03.486360  245311 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 23:09:03.485421  245311 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	W0701 23:09:03.485434  245311 addons.go:162] addon default-storageclass should already be in state true
	I0701 23:09:03.487590  245311 host.go:66] Checking if "old-k8s-version-20220701225700-10066" exists ...
	I0701 23:09:03.487616  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0701 23:09:03.487669  245311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220701225700-10066
	I0701 23:09:03.487702  245311 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:09:03.487728  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 23:09:03.487778  245311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220701225700-10066
	I0701 23:09:03.488071  245311 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220701225700-10066 --format={{.State.Status}}
	I0701 23:09:03.534661  245311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/old-k8s-version-20220701225700-10066/id_rsa Username:docker}
	I0701 23:09:03.535597  245311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/old-k8s-version-20220701225700-10066/id_rsa Username:docker}
	I0701 23:09:03.539275  245311 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 23:09:03.539299  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 23:09:03.539346  245311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220701225700-10066
	I0701 23:09:03.546251  245311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/old-k8s-version-20220701225700-10066/id_rsa Username:docker}
	I0701 23:09:03.548609  245311 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20220701225700-10066" to be "Ready" ...
	I0701 23:09:03.548716  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0701 23:09:03.551308  245311 node_ready.go:49] node "old-k8s-version-20220701225700-10066" has status "Ready":"True"
	I0701 23:09:03.551327  245311 node_ready.go:38] duration metric: took 2.690293ms waiting for node "old-k8s-version-20220701225700-10066" to be "Ready" ...
	I0701 23:09:03.551337  245311 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:09:03.556017  245311 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace to be "Ready" ...
	I0701 23:09:03.581704  245311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/old-k8s-version-20220701225700-10066/id_rsa Username:docker}
	I0701 23:09:03.734911  245311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:09:03.735717  245311 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0701 23:09:03.735740  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0701 23:09:03.735948  245311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 23:09:03.737730  245311 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0701 23:09:03.737747  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0701 23:09:03.828527  245311 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0701 23:09:03.828554  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0701 23:09:03.832137  245311 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0701 23:09:03.832158  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0701 23:09:03.918104  245311 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:09:03.918134  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0701 23:09:03.919243  245311 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0701 23:09:03.919297  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0701 23:09:03.935452  245311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:09:03.936915  245311 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0701 23:09:03.936936  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0701 23:09:04.022810  245311 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0701 23:09:04.022839  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0701 23:09:04.041223  245311 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0701 23:09:04.041259  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0701 23:09:04.126860  245311 start.go:809] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
	I0701 23:09:04.129586  245311 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0701 23:09:04.129609  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0701 23:09:04.145754  245311 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0701 23:09:04.145824  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0701 23:09:04.226029  245311 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:09:04.226067  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0701 23:09:04.241856  245311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:09:04.735718  245311 addons.go:383] Verifying addon metrics-server=true in "old-k8s-version-20220701225700-10066"
	I0701 23:09:05.228327  245311 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0701 23:09:05.229445  245311 addons.go:414] enableAddons completed in 1.808670394s
	I0701 23:09:05.635863  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:08.066674  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:10.566292  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:12.634786  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:15.066975  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:17.566914  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:20.066552  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:22.067112  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:24.566713  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:27.066332  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:29.066984  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:31.566393  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:34.066231  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:36.066602  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:38.566970  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:41.066175  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:43.066510  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:45.066856  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:47.566973  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:50.066024  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:52.066322  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:54.566820  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:57.066159  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:59.066966  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:01.565479  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	cd8a4893d2d37       6fb66cd78abfe       3 minutes ago       Exited              kindnet-cni               3                   b4daedbbfa0f4
	b6c46b43c578c       a634548d10b03       12 minutes ago      Running             kube-proxy                0                   d3671c6594e46
	ac54680228313       5d725196c1f47       12 minutes ago      Running             kube-scheduler            0                   df504f599edde
	9f4bd4048f717       d3377ffb7177c       12 minutes ago      Running             kube-apiserver            0                   7f2c7d420e188
	6af50f79ce840       34cdf99b1bb3b       12 minutes ago      Running             kube-controller-manager   0                   397a5ee302dea
	b90cae4e4b7ea       aebe758cef4cd       12 minutes ago      Running             etcd                      0                   172c2b390191b
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-07-01 22:57:21 UTC, end at Fri 2022-07-01 23:10:04 UTC. --
	Jul 01 23:03:27 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:03:27.538580207Z" level=warning msg="cleaning up after shim disconnected" id=3df1db606ec0f65fcd1deed89e9040dca419e64d6b6b28ff83e39397caef4d37 namespace=k8s.io
	Jul 01 23:03:27 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:03:27.538604058Z" level=info msg="cleaning up dead shim"
	Jul 01 23:03:27 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:03:27.548145862Z" level=warning msg="cleanup warnings time=\"2022-07-01T23:03:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2810 runtime=io.containerd.runc.v2\n"
	Jul 01 23:03:28 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:03:28.140727530Z" level=info msg="RemoveContainer for \"a277d78bbb6bec9cace8086ab22bf4a57f19419ce64afa2fac270141ae6bbe7d\""
	Jul 01 23:03:28 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:03:28.148517115Z" level=info msg="RemoveContainer for \"a277d78bbb6bec9cace8086ab22bf4a57f19419ce64afa2fac270141ae6bbe7d\" returns successfully"
	Jul 01 23:03:43 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:03:43.455699740Z" level=info msg="CreateContainer within sandbox \"b4daedbbfa0f45311c3e6da958f4916278f256b9a9641a1ff09a6fa60fbd55a0\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jul 01 23:03:43 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:03:43.470747769Z" level=info msg="CreateContainer within sandbox \"b4daedbbfa0f45311c3e6da958f4916278f256b9a9641a1ff09a6fa60fbd55a0\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"642f20c5c19f3b5648b97df8fcf7833306ee1888bafb4919e967ed6510feab34\""
	Jul 01 23:03:43 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:03:43.471222249Z" level=info msg="StartContainer for \"642f20c5c19f3b5648b97df8fcf7833306ee1888bafb4919e967ed6510feab34\""
	Jul 01 23:03:43 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:03:43.620792110Z" level=info msg="StartContainer for \"642f20c5c19f3b5648b97df8fcf7833306ee1888bafb4919e967ed6510feab34\" returns successfully"
	Jul 01 23:06:23 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:06:23.968125385Z" level=info msg="shim disconnected" id=642f20c5c19f3b5648b97df8fcf7833306ee1888bafb4919e967ed6510feab34
	Jul 01 23:06:23 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:06:23.968188273Z" level=warning msg="cleaning up after shim disconnected" id=642f20c5c19f3b5648b97df8fcf7833306ee1888bafb4919e967ed6510feab34 namespace=k8s.io
	Jul 01 23:06:23 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:06:23.968201621Z" level=info msg="cleaning up dead shim"
	Jul 01 23:06:23 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:06:23.978062703Z" level=warning msg="cleanup warnings time=\"2022-07-01T23:06:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2920 runtime=io.containerd.runc.v2\n"
	Jul 01 23:06:24 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:06:24.464297829Z" level=info msg="RemoveContainer for \"3df1db606ec0f65fcd1deed89e9040dca419e64d6b6b28ff83e39397caef4d37\""
	Jul 01 23:06:24 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:06:24.468334690Z" level=info msg="RemoveContainer for \"3df1db606ec0f65fcd1deed89e9040dca419e64d6b6b28ff83e39397caef4d37\" returns successfully"
	Jul 01 23:06:48 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:06:48.455940723Z" level=info msg="CreateContainer within sandbox \"b4daedbbfa0f45311c3e6da958f4916278f256b9a9641a1ff09a6fa60fbd55a0\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jul 01 23:06:48 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:06:48.467762103Z" level=info msg="CreateContainer within sandbox \"b4daedbbfa0f45311c3e6da958f4916278f256b9a9641a1ff09a6fa60fbd55a0\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602\""
	Jul 01 23:06:48 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:06:48.468233956Z" level=info msg="StartContainer for \"cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602\""
	Jul 01 23:06:48 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:06:48.538633288Z" level=info msg="StartContainer for \"cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602\" returns successfully"
	Jul 01 23:09:28 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:09:28.971114283Z" level=info msg="shim disconnected" id=cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602
	Jul 01 23:09:28 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:09:28.971176759Z" level=warning msg="cleaning up after shim disconnected" id=cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602 namespace=k8s.io
	Jul 01 23:09:28 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:09:28.971186825Z" level=info msg="cleaning up dead shim"
	Jul 01 23:09:28 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:09:28.981049968Z" level=warning msg="cleanup warnings time=\"2022-07-01T23:09:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3025 runtime=io.containerd.runc.v2\n"
	Jul 01 23:09:29 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:09:29.803314781Z" level=info msg="RemoveContainer for \"642f20c5c19f3b5648b97df8fcf7833306ee1888bafb4919e967ed6510feab34\""
	Jul 01 23:09:29 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:09:29.807628497Z" level=info msg="RemoveContainer for \"642f20c5c19f3b5648b97df8fcf7833306ee1888bafb4919e967ed6510feab34\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220701225718-10066
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220701225718-10066
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04
	                    minikube.k8s.io/name=no-preload-20220701225718-10066
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_01T22_57_50_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 01 Jul 2022 22:57:44 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220701225718-10066
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 01 Jul 2022 23:10:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 01 Jul 2022 23:08:30 +0000   Fri, 01 Jul 2022 22:57:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 01 Jul 2022 23:08:30 +0000   Fri, 01 Jul 2022 22:57:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 01 Jul 2022 23:08:30 +0000   Fri, 01 Jul 2022 22:57:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 01 Jul 2022 23:08:30 +0000   Fri, 01 Jul 2022 22:57:41 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-20220701225718-10066
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873484Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873484Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                82dabe3f-d133-4afb-a4d2-ee1450b85ce0
	  Boot ID:                    a4927dcd-d031-4927-a8c8-2ea0f9a10287
	  Kernel Version:             5.15.0-1012-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.6
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-20220701225718-10066                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-b5wkl                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-no-preload-20220701225718-10066              250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-no-preload-20220701225718-10066    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-5ck82                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-no-preload-20220701225718-10066              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node no-preload-20220701225718-10066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node no-preload-20220701225718-10066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node no-preload-20220701225718-10066 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m   node-controller  Node no-preload-20220701225718-10066 event: Registered Node no-preload-20220701225718-10066 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +1.002277] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +2.015803] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000004] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +4.255546] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000007] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  -0.000001] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000005] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000011] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +8.195166] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000007] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000003] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000003] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	
	* 
	* ==> etcd [b90cae4e4b7ea37050f35e70b6b5bc7d1c0d2b243219ea44859d2ac0853bb228] <==
	* {"level":"info","ts":"2022-07-01T22:57:40.719Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-01T22:57:40.722Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2022-07-01T22:57:40.719Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-01T22:57:40.719Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-01T22:57:40.723Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-01T22:57:40.723Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2022-07-01T22:57:45.376Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.245133ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:discovery\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2022-07-01T22:57:45.376Z","caller":"traceutil/trace.go:171","msg":"trace[98245775] range","detail":"{range_begin:/registry/clusterroles/system:discovery; range_end:; response_count:0; response_revision:80; }","duration":"100.3626ms","start":"2022-07-01T22:57:45.275Z","end":"2022-07-01T22:57:45.376Z","steps":["trace[98245775] 'agreement among raft nodes before linearized reading'  (duration: 96.832114ms)"],"step_count":1}
	{"level":"info","ts":"2022-07-01T22:57:49.038Z","caller":"traceutil/trace.go:171","msg":"trace[1927820922] linearizableReadLoop","detail":"{readStateIndex:259; appliedIndex:259; }","duration":"109.87435ms","start":"2022-07-01T22:57:48.928Z","end":"2022-07-01T22:57:49.038Z","steps":["trace[1927820922] 'read index received'  (duration: 109.866721ms)","trace[1927820922] 'applied index is now lower than readState.Index'  (duration: 6.557µs)"],"step_count":2}
	{"level":"warn","ts":"2022-07-01T22:57:49.038Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.030645ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-no-preload-20220701225718-10066\" ","response":"range_response_count:1 size:6186"}
	{"level":"info","ts":"2022-07-01T22:57:49.038Z","caller":"traceutil/trace.go:171","msg":"trace[1867140299] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-no-preload-20220701225718-10066; range_end:; response_count:1; response_revision:252; }","duration":"110.085806ms","start":"2022-07-01T22:57:48.928Z","end":"2022-07-01T22:57:49.038Z","steps":["trace[1867140299] 'agreement among raft nodes before linearized reading'  (duration: 109.986775ms)"],"step_count":1}
	{"level":"info","ts":"2022-07-01T22:57:49.448Z","caller":"traceutil/trace.go:171","msg":"trace[2075058343] linearizableReadLoop","detail":"{readStateIndex:261; appliedIndex:261; }","duration":"120.342992ms","start":"2022-07-01T22:57:49.328Z","end":"2022-07-01T22:57:49.448Z","steps":["trace[2075058343] 'read index received'  (duration: 120.337394ms)","trace[2075058343] 'applied index is now lower than readState.Index'  (duration: 4.619µs)"],"step_count":2}
	{"level":"warn","ts":"2022-07-01T22:57:49.448Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"120.51147ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-no-preload-20220701225718-10066\" ","response":"range_response_count:1 size:4098"}
	{"level":"info","ts":"2022-07-01T22:57:49.448Z","caller":"traceutil/trace.go:171","msg":"trace[1223968225] range","detail":"{range_begin:/registry/pods/kube-system/etcd-no-preload-20220701225718-10066; range_end:; response_count:1; response_revision:252; }","duration":"120.565364ms","start":"2022-07-01T22:57:49.328Z","end":"2022-07-01T22:57:49.448Z","steps":["trace[1223968225] 'agreement among raft nodes before linearized reading'  (duration: 120.458386ms)"],"step_count":1}
	{"level":"warn","ts":"2022-07-01T22:57:50.278Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"155.261704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:3094"}
	{"level":"info","ts":"2022-07-01T22:57:50.278Z","caller":"traceutil/trace.go:171","msg":"trace[2055409456] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:258; }","duration":"155.385267ms","start":"2022-07-01T22:57:50.122Z","end":"2022-07-01T22:57:50.278Z","steps":["trace[2055409456] 'agreement among raft nodes before linearized reading'  (duration: 70.598719ms)","trace[2055409456] 'range keys from in-memory index tree'  (duration: 84.618935ms)"],"step_count":2}
	{"level":"info","ts":"2022-07-01T22:57:50.278Z","caller":"traceutil/trace.go:171","msg":"trace[2103903859] transaction","detail":"{read_only:false; response_revision:259; number_of_response:1; }","duration":"149.94207ms","start":"2022-07-01T22:57:50.128Z","end":"2022-07-01T22:57:50.278Z","steps":["trace[2103903859] 'process raft request'  (duration: 65.293492ms)","trace[2103903859] 'compare'  (duration: 84.545015ms)"],"step_count":2}
	{"level":"info","ts":"2022-07-01T22:57:51.109Z","caller":"traceutil/trace.go:171","msg":"trace[781945869] linearizableReadLoop","detail":"{readStateIndex:272; appliedIndex:272; }","duration":"174.267022ms","start":"2022-07-01T22:57:50.935Z","end":"2022-07-01T22:57:51.109Z","steps":["trace[781945869] 'read index received'  (duration: 174.257127ms)","trace[781945869] 'applied index is now lower than readState.Index'  (duration: 8.133µs)"],"step_count":2}
	{"level":"warn","ts":"2022-07-01T22:57:51.175Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"240.517113ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-07-01T22:57:51.175Z","caller":"traceutil/trace.go:171","msg":"trace[1713793044] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:262; }","duration":"240.608471ms","start":"2022-07-01T22:57:50.935Z","end":"2022-07-01T22:57:51.175Z","steps":["trace[1713793044] 'agreement among raft nodes before linearized reading'  (duration: 174.376947ms)","trace[1713793044] 'range keys from in-memory index tree'  (duration: 66.10988ms)"],"step_count":2}
	{"level":"warn","ts":"2022-07-01T22:57:52.428Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.210117ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-no-preload-20220701225718-10066\" ","response":"range_response_count:1 size:4359"}
	{"level":"info","ts":"2022-07-01T22:57:52.428Z","caller":"traceutil/trace.go:171","msg":"trace[322197520] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-no-preload-20220701225718-10066; range_end:; response_count:1; response_revision:265; }","duration":"103.318537ms","start":"2022-07-01T22:57:52.325Z","end":"2022-07-01T22:57:52.428Z","steps":["trace[322197520] 'range keys from in-memory index tree'  (duration: 103.086305ms)"],"step_count":1}
	{"level":"info","ts":"2022-07-01T22:58:35.992Z","caller":"traceutil/trace.go:171","msg":"trace[372511059] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"131.829529ms","start":"2022-07-01T22:58:35.860Z","end":"2022-07-01T22:58:35.992Z","steps":["trace[372511059] 'process raft request'  (duration: 34.207641ms)","trace[372511059] 'compare'  (duration: 97.515253ms)"],"step_count":2}
	{"level":"info","ts":"2022-07-01T23:07:41.638Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":451}
	{"level":"info","ts":"2022-07-01T23:07:41.639Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":451,"took":"426.131µs"}
	
	* 
	* ==> kernel <==
	*  23:10:04 up 52 min,  0 users,  load average: 0.65, 1.25, 1.80
	Linux no-preload-20220701225718-10066 5.15.0-1012-gcp #17~20.04.1-Ubuntu SMP Thu Jun 23 16:10:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [9f4bd4048f717703723402dc2bd31164255acb9e131a3e8a35d47bf250201012] <==
	* I0701 22:57:44.359693       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0701 22:57:44.361586       1 cache.go:39] Caches are synced for autoregister controller
	I0701 22:57:44.417898       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0701 22:57:44.418504       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0701 22:57:44.418589       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0701 22:57:44.418642       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0701 22:57:44.418677       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0701 22:57:44.937777       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0701 22:57:45.263186       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0701 22:57:45.266534       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0701 22:57:45.266585       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0701 22:57:45.755535       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0701 22:57:45.789142       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0701 22:57:45.863518       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0701 22:57:45.869086       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I0701 22:57:45.870171       1 controller.go:611] quota admission added evaluator for: endpoints
	I0701 22:57:45.873910       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0701 22:57:46.404473       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0701 22:57:47.255908       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0701 22:57:47.263186       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0701 22:57:47.272732       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0701 22:57:47.350282       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0701 22:58:00.132849       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0701 22:58:00.481609       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0701 22:58:01.229093       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [6af50f79ce840cbfb4c4974a97d45453382d577490ba8a05d4aa2d0412fbf462] <==
	* I0701 22:57:59.430004       1 shared_informer.go:262] Caches are synced for endpoint
	I0701 22:57:59.430047       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0701 22:57:59.432355       1 shared_informer.go:262] Caches are synced for job
	I0701 22:57:59.437580       1 shared_informer.go:262] Caches are synced for PV protection
	I0701 22:57:59.523196       1 shared_informer.go:262] Caches are synced for taint
	I0701 22:57:59.523308       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0701 22:57:59.523359       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0701 22:57:59.523407       1 node_lifecycle_controller.go:1014] Missing timestamp for Node no-preload-20220701225718-10066. Assuming now as a timestamp.
	I0701 22:57:59.523479       1 event.go:294] "Event occurred" object="no-preload-20220701225718-10066" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node no-preload-20220701225718-10066 event: Registered Node no-preload-20220701225718-10066 in Controller"
	I0701 22:57:59.523500       1 node_lifecycle_controller.go:1165] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0701 22:57:59.599326       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I0701 22:57:59.611223       1 shared_informer.go:262] Caches are synced for stateful set
	I0701 22:57:59.625707       1 shared_informer.go:262] Caches are synced for resource quota
	I0701 22:57:59.631783       1 shared_informer.go:262] Caches are synced for resource quota
	I0701 22:57:59.679597       1 shared_informer.go:262] Caches are synced for daemon sets
	I0701 22:58:00.099864       1 shared_informer.go:262] Caches are synced for garbage collector
	I0701 22:58:00.128120       1 shared_informer.go:262] Caches are synced for garbage collector
	I0701 22:58:00.128144       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0701 22:58:00.134791       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0701 22:58:00.470246       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0701 22:58:00.486736       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5ck82"
	I0701 22:58:00.488364       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-b5wkl"
	I0701 22:58:00.541423       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-jzmvd"
	I0701 22:58:00.547729       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-mbfz4"
	I0701 22:58:00.567152       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-jzmvd"
	
	* 
	* ==> kube-proxy [b6c46b43c578cf7a32b68dbbe142a34080a7ce8d919b3b25cd350be67762ece8] <==
	* I0701 22:58:01.121577       1 node.go:163] Successfully retrieved node IP: 192.168.94.2
	I0701 22:58:01.121673       1 server_others.go:138] "Detected node IP" address="192.168.94.2"
	I0701 22:58:01.121706       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0701 22:58:01.224547       1 server_others.go:206] "Using iptables Proxier"
	I0701 22:58:01.224586       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0701 22:58:01.224598       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0701 22:58:01.224617       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0701 22:58:01.224645       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 22:58:01.224819       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 22:58:01.225041       1 server.go:661] "Version info" version="v1.24.2"
	I0701 22:58:01.225053       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 22:58:01.225770       1 config.go:226] "Starting endpoint slice config controller"
	I0701 22:58:01.225786       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0701 22:58:01.225872       1 config.go:317] "Starting service config controller"
	I0701 22:58:01.225877       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0701 22:58:01.226097       1 config.go:444] "Starting node config controller"
	I0701 22:58:01.226102       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0701 22:58:01.325962       1 shared_informer.go:262] Caches are synced for service config
	I0701 22:58:01.326036       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0701 22:58:01.326305       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [ac546802283136693ffa6f232baccec2dc6bf324f6201f319db62ceb6e58dae8] <==
	* E0701 22:57:44.348537       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 22:57:44.348542       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0701 22:57:44.349659       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0701 22:57:44.349704       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0701 22:57:44.349737       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0701 22:57:44.349780       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0701 22:57:45.297253       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0701 22:57:45.297294       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0701 22:57:45.344779       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 22:57:45.344819       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0701 22:57:45.359826       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0701 22:57:45.359853       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0701 22:57:45.425348       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 22:57:45.425400       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0701 22:57:45.441898       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0701 22:57:45.441930       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0701 22:57:45.447744       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0701 22:57:45.447773       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0701 22:57:45.475136       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0701 22:57:45.475182       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0701 22:57:45.483371       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0701 22:57:45.483409       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0701 22:57:45.598153       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0701 22:57:45.598194       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0701 22:57:47.044759       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-07-01 22:57:21 UTC, end at Fri 2022-07-01 23:10:04 UTC. --
	Jul 01 23:08:37 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:08:37.793805    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:08:42 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:08:42.795064    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:08:47 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:08:47.795873    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:08:52 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:08:52.796480    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:08:57 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:08:57.797443    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:02 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:02.798339    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:07 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:07.799070    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:12 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:12.799969    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:17 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:17.800524    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:22 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:22.801790    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:27 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:27.802792    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:29 no-preload-20220701225718-10066 kubelet[1741]: I0701 23:09:29.802017    1741 scope.go:110] "RemoveContainer" containerID="642f20c5c19f3b5648b97df8fcf7833306ee1888bafb4919e967ed6510feab34"
	Jul 01 23:09:29 no-preload-20220701225718-10066 kubelet[1741]: I0701 23:09:29.802314    1741 scope.go:110] "RemoveContainer" containerID="cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602"
	Jul 01 23:09:29 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:29.802716    1741 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-b5wkl_kube-system(bc770683-78b7-449f-a0af-5a2cc006275c)\"" pod="kube-system/kindnet-b5wkl" podUID=bc770683-78b7-449f-a0af-5a2cc006275c
	Jul 01 23:09:32 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:32.803650    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:37 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:37.804637    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:41 no-preload-20220701225718-10066 kubelet[1741]: I0701 23:09:41.453769    1741 scope.go:110] "RemoveContainer" containerID="cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602"
	Jul 01 23:09:41 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:41.454069    1741 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-b5wkl_kube-system(bc770683-78b7-449f-a0af-5a2cc006275c)\"" pod="kube-system/kindnet-b5wkl" podUID=bc770683-78b7-449f-a0af-5a2cc006275c
	Jul 01 23:09:42 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:42.806008    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:47 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:47.807388    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:52 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:52.808776    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:53 no-preload-20220701225718-10066 kubelet[1741]: I0701 23:09:53.453199    1741 scope.go:110] "RemoveContainer" containerID="cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602"
	Jul 01 23:09:53 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:53.453457    1741 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-b5wkl_kube-system(bc770683-78b7-449f-a0af-5a2cc006275c)\"" pod="kube-system/kindnet-b5wkl" podUID=bc770683-78b7-449f-a0af-5a2cc006275c
	Jul 01 23:09:57 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:57.809897    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:10:02 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:10:02.811183    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220701225718-10066 -n no-preload-20220701225718-10066
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220701225718-10066 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-6d4b75cb6d-mbfz4 storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220701225718-10066 describe pod busybox coredns-6d4b75cb6d-mbfz4 storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220701225718-10066 describe pod busybox coredns-6d4b75cb6d-mbfz4 storage-provisioner: exit status 1 (57.397234ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r9hcs (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-r9hcs:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  2m47s (x2 over 8m2s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-6d4b75cb6d-mbfz4" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220701225718-10066 describe pod busybox coredns-6d4b75cb6d-mbfz4 storage-provisioner: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220701225718-10066
helpers_test.go:235: (dbg) docker inspect no-preload-20220701225718-10066:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff",
	        "Created": "2022-07-01T22:57:20.298940328Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 220865,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-01T22:57:20.663867782Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff/hostname",
	        "HostsPath": "/var/lib/docker/containers/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff/hosts",
	        "LogPath": "/var/lib/docker/containers/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff-json.log",
	        "Name": "/no-preload-20220701225718-10066",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220701225718-10066:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220701225718-10066",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f8e3e2d6947308dcda48dbc22fb554a071cfe01234e052dce72729a0c26066f6-init/diff:/var/lib/docker/overlay2/edb6d17bdc5cabdb1fac75f829fa6b696b451285a56daf994d10472ac162d1fa/diff:/var/lib/docker/overlay2/12553db54f9401cb3cd4ea304c82950d708bec4ffbeafa05f361df03389b23f5/diff:/var/lib/docker/overlay2/7e3fcd2813dd5acb24030b1419b7a8a53ea2329287a62c3cd166eb5e47617a4e/diff:/var/lib/docker/overlay2/15f31da888d327ea20972c47badf360a35e5d2b17a3fae8f69cb0d0d1db2bfa0/diff:/var/lib/docker/overlay2/4d7fcdaa5483b16cc999e79056aae74f5814a7aca964696154ac6ec31eebe508/diff:/var/lib/docker/overlay2/9f3a7dbc3c70fba7c77f5d8e54e394ac0c37123defe036d76af7d048d08b974e/diff:/var/lib/docker/overlay2/4f0b0b0343fd3ac1a6d5f774e91228b75f1e33f810714d09d38ecd1e38cfe250/diff:/var/lib/docker/overlay2/bfd684b2b6e2cfe1f30901dbf02332074c65e296b386b083ea38c10128b42115/diff:/var/lib/docker/overlay2/d05e53b39ab1962ee7dd1a17c7c6df7e7db940db5a19b0d9312ccafaf3c67afc/diff:/var/lib/docker/overlay2/7a24a3
8f96e1aff9985bf714d962a37888eced9c4b8a3018e499080634306871/diff:/var/lib/docker/overlay2/b54ee85e20ecd71a593d453d879a26c40869049e30ba5b18b4c021c4cf667ec6/diff:/var/lib/docker/overlay2/cc9f249b45cce30a5a0432a644435d68e765dc097e3bfdbde41b7d725f258866/diff:/var/lib/docker/overlay2/dac381b24f77e5de76295d05ca2d4ae6a611d31efdf28050dc8acbc16f893037/diff:/var/lib/docker/overlay2/fe9cd2517ce90b2330101b987c76605b9d73f40b8c967e33975ad22bd3e427df/diff:/var/lib/docker/overlay2/34da7eb6e789c53ba8f86e0d904243d3314979a523044238ab8b7e9e95f685d4/diff:/var/lib/docker/overlay2/44c3ffe75f70e4d9d5d17064609a4a167961cdca81aab03b2a72dfbe05d09f41/diff:/var/lib/docker/overlay2/76feeb878d0278a48230287307ce3a01000291c1284d2222d114e37836ebc153/diff:/var/lib/docker/overlay2/3ab5eb284edb3402610bb5d4fbd0d8c3fc6b53fd39d2a66cc3a9bb1e313fe5ee/diff:/var/lib/docker/overlay2/7604bb52dbaf4e2c359f0b2b11aa345bbf33498b0346044f4c6ff3178a3e8df5/diff:/var/lib/docker/overlay2/0a19abc50663f9812511b849524a7625a16f183fa64aff5a0a9c1c7664cc2027/diff:/var/lib/d
ocker/overlay2/00d25c59a62da71857955ae29c7c803b77cb4c97b23913f166c62440a3b63456/diff:/var/lib/docker/overlay2/cd6c06e5c4d7bdfb5a78ffcf6c0cf331a6101b713bd238ad5c0ab119ad599bf4/diff:/var/lib/docker/overlay2/73ae644d477b17687dc53b7176c9e05aa10e0e0cc22b0d2adbd96493280d2696/diff:/var/lib/docker/overlay2/1490b9c59b00b3ee6078671a8023960891ee2bfc21bfe92c12a71c87ea04fff1/diff:/var/lib/docker/overlay2/42f40e76b67a2f74340f543b0beab44f98b10e0a0903d4006a173272c52a31d0/diff:/var/lib/docker/overlay2/648763e18e8325f20ff26dd88a1788a2d9e575a9ca24f44b2430918381add179/diff:/var/lib/docker/overlay2/955b13af9b6a054abdcd0dfdf71ce57d6da1b5718bee40c52fc7739b4ff16496/diff:/var/lib/docker/overlay2/1e55f6bfa306e6667808548601e508c0335b7021c9c6791ee5ed3df2514aff12/diff:/var/lib/docker/overlay2/ed1fd4b4489438557a54639b2ed769eb0371193cd0129b222edea8f79274f7e8/diff:/var/lib/docker/overlay2/c4058f514a3e669de2d6139c7bc94d4187679acabf288e44e25b5d50db5b14ac/diff:/var/lib/docker/overlay2/f55c5f04c4a64f585a9b90e7f23e5d5142014f06b3bf877739879029b76
b574f/diff:/var/lib/docker/overlay2/74b9141dda92027ee02e8b3abc4bb05ed13ce01d58f4c3d644e7235e411a863e/diff:/var/lib/docker/overlay2/53dd59b4b1dfa382f4b296dffd27b84e247851098ea1c719fa3b8ed3e158e73d/diff:/var/lib/docker/overlay2/01e3082bf93042f963fa4d0b92796bd5a1f3663e87f24a76a6fb5f3dabdb3d2f/diff:/var/lib/docker/overlay2/6b96d2f01ba7bf16240d81397be753c74fa1eb78889ce36d1a88224496f5ee64/diff:/var/lib/docker/overlay2/1c326eb1859e3f119094b72e595fe89b3be1895193b0f9124ec3aa7be78fa0ff/diff:/var/lib/docker/overlay2/9c31f48fe49160b4b3d824b787329ea1d50133fd86b0c7f2054bebed2bd136b4/diff:/var/lib/docker/overlay2/6b78f7313fcedc631cc839c84f80e065670f47ccc8e548578527bec6b0f2cce3/diff:/var/lib/docker/overlay2/6bf15a167fccdef650b6e6b0daa521189e2c744a9a41640bc0e963421f139437/diff:/var/lib/docker/overlay2/b01230a9beae37bded75410c1a3849626080918fdc36535a1f4092f9227a7ccf/diff:/var/lib/docker/overlay2/793ce1632cff927c8e2528b23a62c94ed141cb52d5df04225085072c61ca5bb7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f8e3e2d6947308dcda48dbc22fb554a071cfe01234e052dce72729a0c26066f6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f8e3e2d6947308dcda48dbc22fb554a071cfe01234e052dce72729a0c26066f6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f8e3e2d6947308dcda48dbc22fb554a071cfe01234e052dce72729a0c26066f6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220701225718-10066",
	                "Source": "/var/lib/docker/volumes/no-preload-20220701225718-10066/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220701225718-10066",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220701225718-10066",
	                "name.minikube.sigs.k8s.io": "no-preload-20220701225718-10066",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "865eb475db627936c6b81b6e3b702ce9e018b17349e5ddb5dde9edb749dbced7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49402"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49401"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49398"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49400"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49399"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/865eb475db62",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220701225718-10066": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6714999bf303",
	                        "no-preload-20220701225718-10066"
	                    ],
	                    "NetworkID": "1edec7b6219d6237636ff26267a26187f0ef2e748e4635b07760f0d37cc8596c",
	                    "EndpointID": "0377a99704388e0f2c261b850c52bf87fff4b394cc37a39d49723586e5d2f940",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220701225718-10066 -n no-preload-20220701225718-10066
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20220701225718-10066 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC | 01 Jul 22 22:59 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC | 01 Jul 22 22:59 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |          |         |         |                     |                     |
	|         | --driver=docker                                            |          |         |         |                     |                     |
	|         | --container-runtime=containerd                             |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:00 UTC | 01 Jul 22 23:00 UTC |
	|         | kubernetes-upgrade-20220701225105-10066                    |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:00 UTC | 01 Jul 22 23:00 UTC |
	|         | disable-driver-mounts-20220701230032-10066                 |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:00 UTC |                     |
	|         | default-k8s-different-port-20220701230032-10066            |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:02 UTC | 01 Jul 22 23:02 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:02 UTC | 01 Jul 22 23:03 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:03 UTC | 01 Jul 22 23:03 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:03 UTC |                     |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                          |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |          |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker                       |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	| start   | -p newest-cni-20220701230537-10066 --memory=2200           | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:06 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p newest-cni-20220701230537-10066 --memory=2200           | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:07 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/01 23:06:34
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 23:06:34.414097  258995 out.go:296] Setting OutFile to fd 1 ...
	I0701 23:06:34.414279  258995 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:06:34.414290  258995 out.go:309] Setting ErrFile to fd 2...
	I0701 23:06:34.414298  258995 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:06:34.414739  258995 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 23:06:34.414979  258995 out.go:303] Setting JSON to false
	I0701 23:06:34.416593  258995 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2947,"bootTime":1656713847,"procs":738,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 23:06:34.416656  258995 start.go:125] virtualization: kvm guest
	I0701 23:06:34.419067  258995 out.go:177] * [newest-cni-20220701230537-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0701 23:06:34.420633  258995 out.go:177]   - MINIKUBE_LOCATION=14483
	I0701 23:06:34.420662  258995 notify.go:193] Checking for updates...
	I0701 23:06:34.422188  258995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 23:06:34.423704  258995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:06:34.425146  258995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 23:06:34.426591  258995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 23:06:34.428291  258995 config.go:178] Loaded profile config "newest-cni-20220701230537-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:06:34.428771  258995 driver.go:360] Setting default libvirt URI to qemu:///system
	I0701 23:06:34.470638  258995 docker.go:137] docker version: linux-20.10.17
	I0701 23:06:34.470753  258995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:06:34.579013  258995 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:49 SystemTime:2022-07-01 23:06:34.501366324 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:06:34.579159  258995 docker.go:254] overlay module found
	I0701 23:06:34.581456  258995 out.go:177] * Using the docker driver based on existing profile
	I0701 23:06:34.582890  258995 start.go:284] selected driver: docker
	I0701 23:06:34.582907  258995 start.go:808] validating driver "docker" against &{Name:newest-cni-20220701230537-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220701230537-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:06:34.583008  258995 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 23:06:34.583884  258995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:06:34.690709  258995 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-07-01 23:06:34.615635547 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:06:34.690944  258995 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0701 23:06:34.690963  258995 cni.go:95] Creating CNI manager for ""
	I0701 23:06:34.690974  258995 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:06:34.690988  258995 start_flags.go:310] config:
	{Name:newest-cni-20220701230537-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220701230537-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:06:34.692981  258995 out.go:177] * Starting control plane node newest-cni-20220701230537-10066 in cluster newest-cni-20220701230537-10066
	I0701 23:06:34.694495  258995 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0701 23:06:34.695843  258995 out.go:177] * Pulling base image ...
	I0701 23:06:34.697133  258995 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:06:34.697166  258995 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4
	I0701 23:06:34.697170  258995 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0701 23:06:34.697173  258995 cache.go:57] Caching tarball of preloaded images
	I0701 23:06:34.697459  258995 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 23:06:34.697487  258995 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on containerd
	I0701 23:06:34.697601  258995 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/newest-cni-20220701230537-10066/config.json ...
	I0701 23:06:34.730986  258995 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0701 23:06:34.731010  258995 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0701 23:06:34.731023  258995 cache.go:208] Successfully downloaded all kic artifacts
	I0701 23:06:34.731054  258995 start.go:352] acquiring machines lock for newest-cni-20220701230537-10066: {Name:mk09082a8962197bf1403d5caed70fa1c313958d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:06:34.731127  258995 start.go:356] acquired machines lock for "newest-cni-20220701230537-10066" in 55.863µs
	I0701 23:06:34.731145  258995 start.go:94] Skipping create...Using existing machine configuration
	I0701 23:06:34.731149  258995 fix.go:55] fixHost starting: 
	I0701 23:06:34.731361  258995 cli_runner.go:164] Run: docker container inspect newest-cni-20220701230537-10066 --format={{.State.Status}}
	I0701 23:06:34.765033  258995 fix.go:103] recreateIfNeeded on newest-cni-20220701230537-10066: state=Stopped err=<nil>
	W0701 23:06:34.765086  258995 fix.go:129] unexpected machine state, will restart: <nil>
	I0701 23:06:34.767404  258995 out.go:177] * Restarting existing docker container for "newest-cni-20220701230537-10066" ...
	I0701 23:06:34.165114  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:36.663648  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
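	Note: lines tagged with PID 245311 are interleaved from a second minikube process logging concurrently in this test batch (likely the TestKubernetesUpgrade start, still polling its coredns pod), while PID 258995 is the newest-cni start being traced here.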
	I0701 23:06:34.768673  258995 cli_runner.go:164] Run: docker start newest-cni-20220701230537-10066
	I0701 23:06:35.162994  258995 cli_runner.go:164] Run: docker container inspect newest-cni-20220701230537-10066 --format={{.State.Status}}
	I0701 23:06:35.200123  258995 kic.go:416] container "newest-cni-20220701230537-10066" state is running.
	I0701 23:06:35.200495  258995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220701230537-10066
	I0701 23:06:35.234865  258995 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/newest-cni-20220701230537-10066/config.json ...
	I0701 23:06:35.235114  258995 machine.go:88] provisioning docker machine ...
	I0701 23:06:35.235145  258995 ubuntu.go:169] provisioning hostname "newest-cni-20220701230537-10066"
	I0701 23:06:35.235200  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:06:35.268767  258995 main.go:134] libmachine: Using SSH client type: native
	I0701 23:06:35.268976  258995 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0701 23:06:35.269001  258995 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220701230537-10066 && echo "newest-cni-20220701230537-10066" | sudo tee /etc/hostname
	I0701 23:06:35.269725  258995 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51796->127.0.0.1:49432: read: connection reset by peer
	I0701 23:06:38.394868  258995 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220701230537-10066
	
	I0701 23:06:38.394942  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:06:38.430841  258995 main.go:134] libmachine: Using SSH client type: native
	I0701 23:06:38.431017  258995 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0701 23:06:38.431050  258995 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220701230537-10066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220701230537-10066/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220701230537-10066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 23:06:38.546040  258995 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0701 23:06:38.546068  258995 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
	I0701 23:06:38.546099  258995 ubuntu.go:177] setting up certificates
	I0701 23:06:38.546106  258995 provision.go:83] configureAuth start
	I0701 23:06:38.546147  258995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220701230537-10066
	I0701 23:06:38.580939  258995 provision.go:138] copyHostCerts
	I0701 23:06:38.581009  258995 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
	I0701 23:06:38.581028  258995 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
	I0701 23:06:38.581116  258995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1679 bytes)
	I0701 23:06:38.581229  258995 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
	I0701 23:06:38.581242  258995 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
	I0701 23:06:38.581284  258995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
	I0701 23:06:38.581355  258995 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
	I0701 23:06:38.581367  258995 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
	I0701 23:06:38.581403  258995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
	I0701 23:06:38.581462  258995 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220701230537-10066 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220701230537-10066]
	I0701 23:06:38.757827  258995 provision.go:172] copyRemoteCerts
	I0701 23:06:38.757890  258995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 23:06:38.757937  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:06:38.792195  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/newest-cni-20220701230537-10066/id_rsa Username:docker}
	I0701 23:06:38.881702  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 23:06:38.899024  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0701 23:06:38.915653  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 23:06:38.932402  258995 provision.go:86] duration metric: configureAuth took 386.287684ms
	I0701 23:06:38.932423  258995 ubuntu.go:193] setting minikube options for container-runtime
	I0701 23:06:38.932594  258995 config.go:178] Loaded profile config "newest-cni-20220701230537-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:06:38.932607  258995 machine.go:91] provisioned docker machine in 3.697475646s
	I0701 23:06:38.932616  258995 start.go:306] post-start starting for "newest-cni-20220701230537-10066" (driver="docker")
	I0701 23:06:38.932622  258995 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 23:06:38.932671  258995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 23:06:38.932715  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:06:38.969297  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/newest-cni-20220701230537-10066/id_rsa Username:docker}
	I0701 23:06:39.054415  258995 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 23:06:39.057045  258995 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 23:06:39.057067  258995 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 23:06:39.057075  258995 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 23:06:39.057081  258995 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0701 23:06:39.057089  258995 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
	I0701 23:06:39.057136  258995 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
	I0701 23:06:39.057208  258995 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem -> 100662.pem in /etc/ssl/certs
	I0701 23:06:39.057285  258995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 23:06:39.063810  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:06:39.081005  258995 start.go:309] post-start completed in 148.379288ms
	I0701 23:06:39.081068  258995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 23:06:39.081108  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:06:39.114447  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/newest-cni-20220701230537-10066/id_rsa Username:docker}
	I0701 23:06:39.194921  258995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 23:06:39.199018  258995 fix.go:57] fixHost completed within 4.467863189s
	I0701 23:06:39.199040  258995 start.go:81] releasing machines lock for "newest-cni-20220701230537-10066", held for 4.467899434s
	I0701 23:06:39.199121  258995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220701230537-10066
	I0701 23:06:39.234497  258995 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0701 23:06:39.234517  258995 ssh_runner.go:195] Run: systemctl --version
	I0701 23:06:39.234595  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:06:39.234601  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:06:39.269988  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/newest-cni-20220701230537-10066/id_rsa Username:docker}
	I0701 23:06:39.270685  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/newest-cni-20220701230537-10066/id_rsa Username:docker}
	I0701 23:06:39.372455  258995 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 23:06:39.383372  258995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 23:06:39.392509  258995 docker.go:179] disabling docker service ...
	I0701 23:06:39.392563  258995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 23:06:39.401779  258995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 23:06:39.410110  258995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 23:06:38.664279  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:40.664458  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:39.492683  258995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 23:06:39.567308  258995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0701 23:06:39.575996  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 23:06:39.588820  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0701 23:06:39.596772  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0701 23:06:39.604536  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0701 23:06:39.612557  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0701 23:06:39.620178  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0701 23:06:39.627399  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %!s(MISSING) "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
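	For reference, the base64 payload written to 02-containerd.conf decodes to a single line selecting containerd's v2 configuration schema, which can be checked with:
	
		$ echo dmVyc2lvbiA9IDIK | base64 -d
		version = 2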
	I0701 23:06:39.640030  258995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 23:06:39.646248  258995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 23:06:39.652664  258995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 23:06:39.723703  258995 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 23:06:39.793613  258995 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0701 23:06:39.793683  258995 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0701 23:06:39.797185  258995 start.go:471] Will wait 60s for crictl version
	I0701 23:06:39.797234  258995 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:06:39.823490  258995 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-07-01T23:06:39Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0701 23:06:43.164221  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:45.164804  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:47.664048  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:50.870686  258995 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:06:50.894066  258995 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0701 23:06:50.894125  258995 ssh_runner.go:195] Run: containerd --version
	I0701 23:06:50.922238  258995 ssh_runner.go:195] Run: containerd --version
	I0701 23:06:50.953006  258995 out.go:177] * Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	I0701 23:06:50.954593  258995 cli_runner.go:164] Run: docker network inspect newest-cni-20220701230537-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 23:06:50.988248  258995 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0701 23:06:50.991464  258995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
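	Note: the write-to-/tmp-then-sudo-cp idiom is used because shell redirections run with the invoking user's privileges, so even under sudo the braced group's output could not be redirected into /etc/hosts directly; the grep -v pass first drops any stale host.minikube.internal entry before the fresh mapping is appended.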
	I0701 23:06:51.002170  258995 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0701 23:06:49.664251  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:52.164105  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:51.003471  258995 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:06:51.003534  258995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:06:51.026300  258995 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:06:51.026322  258995 containerd.go:461] Images already preloaded, skipping extraction
	I0701 23:06:51.026369  258995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:06:51.049194  258995 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:06:51.049211  258995 cache_images.go:84] Images are preloaded, skipping loading
	I0701 23:06:51.049251  258995 ssh_runner.go:195] Run: sudo crictl info
	I0701 23:06:51.072512  258995 cni.go:95] Creating CNI manager for ""
	I0701 23:06:51.072531  258995 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:06:51.072542  258995 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0701 23:06:51.072554  258995 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220701230537-10066 NodeName:newest-cni-20220701230537-10066 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0701 23:06:51.072676  258995 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20220701230537-10066"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
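	The "0%!"(MISSING) sequences in the eviction thresholds above are the same Go fmt artifact noted earlier for crictl.yaml: the values in the generated kubeadm.yaml are plainly "0%", which, together with imageGCHighThresholdPercent: 100, disables kubelet disk-pressure eviction as the preceding comment indicates.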
	
	I0701 23:06:51.072751  258995 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220701230537-10066 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220701230537-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0701 23:06:51.072793  258995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0701 23:06:51.079832  258995 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 23:06:51.079884  258995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 23:06:51.086363  258995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (561 bytes)
	I0701 23:06:51.098788  258995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 23:06:51.111342  258995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2196 bytes)
	I0701 23:06:51.123722  258995 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0701 23:06:51.126643  258995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:06:51.135317  258995 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/newest-cni-20220701230537-10066 for IP: 192.168.67.2
	I0701 23:06:51.135416  258995 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
	I0701 23:06:51.135484  258995 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
	I0701 23:06:51.135580  258995 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/newest-cni-20220701230537-10066/client.key
	I0701 23:06:51.135648  258995 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/newest-cni-20220701230537-10066/apiserver.key.c7fa3a9e
	I0701 23:06:51.135702  258995 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/newest-cni-20220701230537-10066/proxy-client.key
	I0701 23:06:51.135842  258995 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem (1338 bytes)
	W0701 23:06:51.135889  258995 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066_empty.pem, impossibly tiny 0 bytes
	I0701 23:06:51.135906  258995 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 23:06:51.135941  258995 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
	I0701 23:06:51.135976  258995 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
	I0701 23:06:51.136009  258995 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1679 bytes)
	I0701 23:06:51.136063  258995 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:06:51.136808  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/newest-cni-20220701230537-10066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0701 23:06:51.153739  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/newest-cni-20220701230537-10066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 23:06:51.171043  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/newest-cni-20220701230537-10066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 23:06:51.187876  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/newest-cni-20220701230537-10066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0701 23:06:51.205092  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 23:06:51.222443  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0701 23:06:51.239190  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 23:06:51.255686  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 23:06:51.272278  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /usr/share/ca-certificates/100662.pem (1708 bytes)
	I0701 23:06:51.288886  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 23:06:51.305646  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem --> /usr/share/ca-certificates/10066.pem (1338 bytes)
	I0701 23:06:51.321574  258995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 23:06:51.333572  258995 ssh_runner.go:195] Run: openssl version
	I0701 23:06:51.337924  258995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100662.pem && ln -fs /usr/share/ca-certificates/100662.pem /etc/ssl/certs/100662.pem"
	I0701 23:06:51.344691  258995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100662.pem
	I0701 23:06:51.347642  258995 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul  1 22:28 /usr/share/ca-certificates/100662.pem
	I0701 23:06:51.347698  258995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100662.pem
	I0701 23:06:51.352115  258995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100662.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 23:06:51.358296  258995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 23:06:51.365046  258995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:06:51.367828  258995 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul  1 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:06:51.367863  258995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:06:51.372300  258995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 23:06:51.378814  258995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10066.pem && ln -fs /usr/share/ca-certificates/10066.pem /etc/ssl/certs/10066.pem"
	I0701 23:06:51.385751  258995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10066.pem
	I0701 23:06:51.388493  258995 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul  1 22:28 /usr/share/ca-certificates/10066.pem
	I0701 23:06:51.388532  258995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10066.pem
	I0701 23:06:51.393152  258995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10066.pem /etc/ssl/certs/51391683.0"
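	The three-step sequence above (test/ln, ls -la, `openssl x509 -hash`) registers each CA with OpenSSL's hashed trust directory: the cert is linked under /usr/share/ca-certificates and then symlinked as /etc/ssl/certs/<subject-hash>.0, the name OpenSSL uses for lookup; for minikubeCA.pem that hash is b5213941, matching the symlink created above. A sketch of the same steps (hypothetical helper, shelling out to the same openssl invocation seen in the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA links certPath into OpenSSL's hashed trust directory:
	// /etc/ssl/certs/<hash>.0 -> certPath, where <hash> comes from
	// `openssl x509 -hash -noout -in certPath`.
	func installCA(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // `ln -fs` semantics: replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}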
	I0701 23:06:51.399411  258995 kubeadm.go:395] StartCluster: {Name:newest-cni-20220701230537-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220701230537-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:06:51.399484  258995 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0701 23:06:51.399513  258995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:06:51.423320  258995 cri.go:87] found id: "15e59c91bb35a121a6064d222855fdf8348aa49db52a0402e51c98a37db31217"
	I0701 23:06:51.423349  258995 cri.go:87] found id: "ab56834fe0ef673390b371b4f10026a71c82af260e19d3cb94a92d31ab2ab128"
	I0701 23:06:51.423358  258995 cri.go:87] found id: "e704eecd1e08fe666557c22416eea4b800bdb008c82897995c74cbca68f3dc25"
	I0701 23:06:51.423364  258995 cri.go:87] found id: "c99e390f96da42151f2fad7726b93e164928077e543aada0e5236cfc43666e39"
	I0701 23:06:51.423370  258995 cri.go:87] found id: "e725efc4c7a224927581ed2f09c7cf2d7df8a546cf0b0bc29620d113ee10791e"
	I0701 23:06:51.423376  258995 cri.go:87] found id: "150015fbfd6f8938126af2269f26d19d16e82786e8d6b2b5f0b62657b7a03a80"
	I0701 23:06:51.423382  258995 cri.go:87] found id: ""
	I0701 23:06:51.423411  258995 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0701 23:06:51.435450  258995 cri.go:114] JSON = null
	W0701 23:06:51.435489  258995 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
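	The "unpause failed" warning above comes from a consistency check: crictl reported six kube-system containers, but `runc list -f json` returned a null JSON body, i.e. zero containers under that runc root, so there was nothing to unpause. A sketch of parsing that output (assuming runc's documented JSON list format; helper name is illustrative):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer covers the fields of `runc list -f json` we care about.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// pausedIDs returns the IDs of paused containers under the given runc root.
	// A literal `null` body (as in the log) unmarshals to a nil slice.
	func pausedIDs(root string) ([]string, error) {
		out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
		if err != nil {
			return nil, err
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var ids []string
		for _, c := range cs {
			if c.Status == "paused" {
				ids = append(ids, c.ID)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := pausedIDs("/run/containerd/runc/k8s.io")
		fmt.Println(ids, err)
	}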
	I0701 23:06:51.435526  258995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 23:06:51.441918  258995 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0701 23:06:51.441934  258995 kubeadm.go:626] restartCluster start
	I0701 23:06:51.441964  258995 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 23:06:51.448643  258995 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:51.449685  258995 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220701230537-10066" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:06:51.450233  258995 kubeconfig.go:127] "newest-cni-20220701230537-10066" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig - will repair!
	I0701 23:06:51.451274  258995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:06:51.452754  258995 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 23:06:51.459930  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:51.459971  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:51.467554  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:51.667947  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:51.668010  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:51.677081  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:51.868396  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:51.868468  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:51.877241  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:52.068521  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:52.068611  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:52.077586  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:52.267700  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:52.267771  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:52.276288  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:52.468495  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:52.468583  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:52.477308  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:52.668618  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:52.668753  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:52.677279  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:52.868579  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:52.868641  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:52.877568  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:53.067780  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:53.067852  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:53.077081  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:53.268390  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:53.268471  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:53.276990  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:53.468289  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:53.468362  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:53.476813  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:53.668091  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:53.668156  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:53.676949  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:53.868230  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:53.868307  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:53.876772  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:54.068055  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:54.068126  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:54.076663  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:54.267936  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:54.267998  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:54.276276  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:54.164784  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:56.663515  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:06:54.467844  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:54.467916  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:54.476315  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:54.476336  258995 api_server.go:165] Checking apiserver status ...
	I0701 23:06:54.476368  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:06:54.484053  258995 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:54.484075  258995 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
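	The burst of identical "Checking apiserver status" entries above is a poll loop: judging by the timestamps, minikube re-runs `pgrep` on roughly a 200ms cadence until an apiserver process appears or the wait budget is spent, then concludes "needs reconfigure". A minimal sketch of that pattern (hypothetical helper; the real loop lives in api_server.go and differs in detail):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForAPIServerPID polls pgrep until a kube-apiserver process appears
	// or the deadline passes, mirroring the repeated checks in the log.
	func waitForAPIServerPID(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				return strings.TrimSpace(string(out)), nil
			}
			time.Sleep(200 * time.Millisecond)
		}
		return "", fmt.Errorf("timed out waiting for kube-apiserver process")
	}

	func main() {
		pid, err := waitForAPIServerPID(3 * time.Second)
		fmt.Println(pid, err)
	}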
	I0701 23:06:54.484083  258995 kubeadm.go:1092] stopping kube-system containers ...
	I0701 23:06:54.484108  258995 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0701 23:06:54.484158  258995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:06:54.506999  258995 cri.go:87] found id: "15e59c91bb35a121a6064d222855fdf8348aa49db52a0402e51c98a37db31217"
	I0701 23:06:54.507021  258995 cri.go:87] found id: "ab56834fe0ef673390b371b4f10026a71c82af260e19d3cb94a92d31ab2ab128"
	I0701 23:06:54.507030  258995 cri.go:87] found id: "e704eecd1e08fe666557c22416eea4b800bdb008c82897995c74cbca68f3dc25"
	I0701 23:06:54.507040  258995 cri.go:87] found id: "c99e390f96da42151f2fad7726b93e164928077e543aada0e5236cfc43666e39"
	I0701 23:06:54.507049  258995 cri.go:87] found id: "e725efc4c7a224927581ed2f09c7cf2d7df8a546cf0b0bc29620d113ee10791e"
	I0701 23:06:54.507063  258995 cri.go:87] found id: "150015fbfd6f8938126af2269f26d19d16e82786e8d6b2b5f0b62657b7a03a80"
	I0701 23:06:54.507076  258995 cri.go:87] found id: ""
	I0701 23:06:54.507088  258995 cri.go:232] Stopping containers: [15e59c91bb35a121a6064d222855fdf8348aa49db52a0402e51c98a37db31217 ab56834fe0ef673390b371b4f10026a71c82af260e19d3cb94a92d31ab2ab128 e704eecd1e08fe666557c22416eea4b800bdb008c82897995c74cbca68f3dc25 c99e390f96da42151f2fad7726b93e164928077e543aada0e5236cfc43666e39 e725efc4c7a224927581ed2f09c7cf2d7df8a546cf0b0bc29620d113ee10791e 150015fbfd6f8938126af2269f26d19d16e82786e8d6b2b5f0b62657b7a03a80]
	I0701 23:06:54.507133  258995 ssh_runner.go:195] Run: which crictl
	I0701 23:06:54.509739  258995 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 15e59c91bb35a121a6064d222855fdf8348aa49db52a0402e51c98a37db31217 ab56834fe0ef673390b371b4f10026a71c82af260e19d3cb94a92d31ab2ab128 e704eecd1e08fe666557c22416eea4b800bdb008c82897995c74cbca68f3dc25 c99e390f96da42151f2fad7726b93e164928077e543aada0e5236cfc43666e39 e725efc4c7a224927581ed2f09c7cf2d7df8a546cf0b0bc29620d113ee10791e 150015fbfd6f8938126af2269f26d19d16e82786e8d6b2b5f0b62657b7a03a80
	I0701 23:06:54.533856  258995 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0701 23:06:54.543470  258995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:06:54.550132  258995 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul  1 23:05 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul  1 23:05 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jul  1 23:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul  1 23:05 /etc/kubernetes/scheduler.conf
	
	I0701 23:06:54.550194  258995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0701 23:06:54.557308  258995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0701 23:06:54.563693  258995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0701 23:06:54.570164  258995 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:54.570204  258995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 23:06:54.576255  258995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0701 23:06:54.582576  258995 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:06:54.582612  258995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
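	The grep-then-rm sequence above is stale-config cleanup: each kubeconfig under /etc/kubernetes must already point at https://control-plane.minikube.internal:8443, and any file where the grep exits non-zero is removed so kubeadm will regenerate it. Sketched below (illustrative; the endpoint and file list are taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
		for _, f := range files {
			path := "/etc/kubernetes/" + f
			// grep exits non-zero when the endpoint is absent: drop the file.
			if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
				fmt.Printf("%s may not contain %s - removing\n", path, endpoint)
				_ = exec.Command("sudo", "rm", "-f", path).Run()
			}
		}
	}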
	I0701 23:06:54.588536  258995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:06:54.594900  258995 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0701 23:06:54.594920  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:06:54.638253  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:06:55.351096  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:06:55.538288  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:06:55.589369  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
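	Rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing /var/tmp/minikube/kubeadm.yaml, which preserves cluster state such as the etcd data directory. A sketch of the same sequence (binary path and config path copied from the log; the sudo/env wrapping and error handling are simplified):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, p := range phases {
			args := append([]string{"init", "phase"}, p...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("/var/lib/minikube/binaries/v1.24.2/kubeadm", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
				os.Exit(1)
			}
		}
	}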
	I0701 23:06:55.726097  258995 api_server.go:51] waiting for apiserver process to appear ...
	I0701 23:06:55.726152  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:06:56.235342  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:06:56.735369  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:06:56.747981  258995 api_server.go:71] duration metric: took 1.02187943s to wait for apiserver process to appear ...
	I0701 23:06:56.748013  258995 api_server.go:87] waiting for apiserver healthz status ...
	I0701 23:06:56.748027  258995 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0701 23:06:56.748455  258995 api_server.go:256] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0701 23:06:57.249176  258995 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0701 23:06:59.872070  258995 api_server.go:266] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0701 23:06:59.872161  258995 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0701 23:07:00.249349  258995 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0701 23:07:00.254757  258995 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:07:00.254789  258995 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:07:00.749355  258995 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0701 23:07:00.753608  258995 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:07:00.753630  258995 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:07:01.249222  258995 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0701 23:07:01.253956  258995 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0701 23:07:01.260004  258995 api_server.go:140] control plane version: v1.24.2
	I0701 23:07:01.260026  258995 api_server.go:130] duration metric: took 4.512008107s to wait for apiserver health ...
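	The healthz progression above is the expected boot sequence: first connection refused while the apiserver binds, then 403 because the unauthenticated probe reaches /healthz before anonymous access to it is wired up, then 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, and finally 200 "ok". A sketch of the probe loop (hypothetical helper; TLS verification is skipped because the probe runs before the new CA is trusted locally):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls the apiserver's /healthz until the body is exactly "ok".
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// The probe predates local trust of the cluster CA, so skip verify.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz never returned ok within %v", timeout)
	}

	func main() {
		fmt.Println(waitHealthz("https://192.168.67.2:8443/healthz", time.Minute))
	}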
	I0701 23:07:01.260035  258995 cni.go:95] Creating CNI manager for ""
	I0701 23:07:01.260041  258995 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:07:01.262034  258995 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:07:01.263670  258995 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:07:01.267622  258995 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:07:01.267646  258995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:07:01.282196  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0701 23:07:02.149306  258995 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 23:07:02.155786  258995 system_pods.go:59] 9 kube-system pods found
	I0701 23:07:02.155815  258995 system_pods.go:61] "coredns-6d4b75cb6d-qgxl7" [80686edc-f7b3-4be5-a9d0-91b187b1b0bc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:07:02.155823  258995 system_pods.go:61] "etcd-newest-cni-20220701230537-10066" [7a5e1e12-f631-485a-8324-e3e2a26c67c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0701 23:07:02.155830  258995 system_pods.go:61] "kindnet-gj46g" [f7c7e015-c6c8-4e86-b1c3-de4ed4f1ea38] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0701 23:07:02.155836  258995 system_pods.go:61] "kube-apiserver-newest-cni-20220701230537-10066" [8960b5c1-6baa-444c-94f9-9d7f32b4a545] Running
	I0701 23:07:02.155845  258995 system_pods.go:61] "kube-controller-manager-newest-cni-20220701230537-10066" [e083017e-b2be-407f-b065-b42aae13b35f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0701 23:07:02.155855  258995 system_pods.go:61] "kube-proxy-xgmtt" [6c60e6df-47d0-4ac9-9540-87afae30a047] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0701 23:07:02.155865  258995 system_pods.go:61] "kube-scheduler-newest-cni-20220701230537-10066" [42a3efda-99c0-4cb6-bf0f-25fd8457f229] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0701 23:07:02.155872  258995 system_pods.go:61] "metrics-server-5c6f97fb75-jlkjx" [67a9c41d-57b7-47e2-b9c2-ec9787a1f3d2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:07:02.155878  258995 system_pods.go:61] "storage-provisioner" [0de651df-edaa-428d-96f4-1c501a24c13d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:07:02.155883  258995 system_pods.go:74] duration metric: took 6.555514ms to wait for pod list to return data ...
	I0701 23:07:02.155890  258995 node_conditions.go:102] verifying NodePressure condition ...
	I0701 23:07:02.157924  258995 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0701 23:07:02.157948  258995 node_conditions.go:123] node cpu capacity is 8
	I0701 23:07:02.157961  258995 node_conditions.go:105] duration metric: took 2.066517ms to run NodePressure ...
	I0701 23:07:02.157980  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:07:02.317385  258995 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 23:07:02.324748  258995 ops.go:34] apiserver oom_adj: -16
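	The oom_adj check above confirms control-plane protection: a /proc/<pid>/oom_adj of -16 tells the kernel OOM killer to heavily deprioritize the process (-17 would exempt it entirely; modern kernels also expose the finer-grained oom_score_adj). A sketch of the same check (hypothetical; pgrep pattern simplified from the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Find the newest kube-apiserver PID, as `pgrep` does in the log.
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "no kube-apiserver process:", err)
			os.Exit(1)
		}
		pid := strings.TrimSpace(string(out))
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
	}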
	I0701 23:07:02.324771  258995 kubeadm.go:630] restartCluster took 10.882830695s
	I0701 23:07:02.324779  258995 kubeadm.go:397] StartCluster complete in 10.925374477s
	I0701 23:07:02.324797  258995 settings.go:142] acquiring lock: {Name:mk319951f11766fbe002e53432d5b04e4322851e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:07:02.324908  258995 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:07:02.326175  258995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:07:02.332030  258995 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220701230537-10066" rescaled to 1
	I0701 23:07:02.332098  258995 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 23:07:02.332116  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 23:07:02.334668  258995 out.go:177] * Verifying Kubernetes components...
	I0701 23:07:02.332189  258995 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0701 23:07:02.332347  258995 config.go:178] Loaded profile config "newest-cni-20220701230537-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:07:02.335960  258995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:07:02.335984  258995 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220701230537-10066"
	I0701 23:07:02.336001  258995 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220701230537-10066"
	I0701 23:07:02.336019  258995 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220701230537-10066"
	I0701 23:07:02.336022  258995 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220701230537-10066"
	W0701 23:07:02.336032  258995 addons.go:162] addon storage-provisioner should already be in state true
	I0701 23:07:02.336053  258995 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220701230537-10066"
	W0701 23:07:02.336068  258995 addons.go:162] addon metrics-server should already be in state true
	I0701 23:07:02.336087  258995 host.go:66] Checking if "newest-cni-20220701230537-10066" exists ...
	I0701 23:07:02.336115  258995 host.go:66] Checking if "newest-cni-20220701230537-10066" exists ...
	I0701 23:07:02.336024  258995 addons.go:65] Setting dashboard=true in profile "newest-cni-20220701230537-10066"
	I0701 23:07:02.336405  258995 addons.go:153] Setting addon dashboard=true in "newest-cni-20220701230537-10066"
	W0701 23:07:02.336419  258995 addons.go:162] addon dashboard should already be in state true
	I0701 23:07:02.336456  258995 host.go:66] Checking if "newest-cni-20220701230537-10066" exists ...
	I0701 23:07:02.336598  258995 cli_runner.go:164] Run: docker container inspect newest-cni-20220701230537-10066 --format={{.State.Status}}
	I0701 23:07:02.336649  258995 cli_runner.go:164] Run: docker container inspect newest-cni-20220701230537-10066 --format={{.State.Status}}
	I0701 23:07:02.336026  258995 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220701230537-10066"
	I0701 23:07:02.336889  258995 cli_runner.go:164] Run: docker container inspect newest-cni-20220701230537-10066 --format={{.State.Status}}
	I0701 23:07:02.337135  258995 cli_runner.go:164] Run: docker container inspect newest-cni-20220701230537-10066 --format={{.State.Status}}
	I0701 23:07:02.387250  258995 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 23:07:02.388620  258995 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0701 23:07:02.389854  258995 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0701 23:07:02.389876  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0701 23:07:02.388657  258995 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:07:02.389940  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 23:07:02.390008  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:07:02.391427  258995 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0701 23:07:02.389915  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:07:02.392909  258995 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0701 23:06:58.664292  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:01.164567  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:02.394340  258995 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0701 23:07:02.394362  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0701 23:07:02.394411  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:07:02.397797  258995 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220701230537-10066"
	W0701 23:07:02.397823  258995 addons.go:162] addon default-storageclass should already be in state true
	I0701 23:07:02.397849  258995 host.go:66] Checking if "newest-cni-20220701230537-10066" exists ...
	I0701 23:07:02.398360  258995 cli_runner.go:164] Run: docker container inspect newest-cni-20220701230537-10066 --format={{.State.Status}}
	I0701 23:07:02.418340  258995 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0701 23:07:02.418351  258995 api_server.go:51] waiting for apiserver process to appear ...
	I0701 23:07:02.418416  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:07:02.432659  258995 api_server.go:71] duration metric: took 100.522964ms to wait for apiserver process to appear ...
	I0701 23:07:02.432716  258995 api_server.go:87] waiting for apiserver healthz status ...
	I0701 23:07:02.432731  258995 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0701 23:07:02.438173  258995 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0701 23:07:02.439473  258995 api_server.go:140] control plane version: v1.24.2
	I0701 23:07:02.439496  258995 api_server.go:130] duration metric: took 6.77174ms to wait for apiserver health ...
	I0701 23:07:02.439506  258995 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 23:07:02.440178  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/newest-cni-20220701230537-10066/id_rsa Username:docker}
	I0701 23:07:02.443373  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/newest-cni-20220701230537-10066/id_rsa Username:docker}
	I0701 23:07:02.446379  258995 system_pods.go:59] 9 kube-system pods found
	I0701 23:07:02.446414  258995 system_pods.go:61] "coredns-6d4b75cb6d-qgxl7" [80686edc-f7b3-4be5-a9d0-91b187b1b0bc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:07:02.446427  258995 system_pods.go:61] "etcd-newest-cni-20220701230537-10066" [7a5e1e12-f631-485a-8324-e3e2a26c67c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0701 23:07:02.446443  258995 system_pods.go:61] "kindnet-gj46g" [f7c7e015-c6c8-4e86-b1c3-de4ed4f1ea38] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0701 23:07:02.446462  258995 system_pods.go:61] "kube-apiserver-newest-cni-20220701230537-10066" [8960b5c1-6baa-444c-94f9-9d7f32b4a545] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0701 23:07:02.446483  258995 system_pods.go:61] "kube-controller-manager-newest-cni-20220701230537-10066" [e083017e-b2be-407f-b065-b42aae13b35f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0701 23:07:02.446500  258995 system_pods.go:61] "kube-proxy-xgmtt" [6c60e6df-47d0-4ac9-9540-87afae30a047] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0701 23:07:02.446517  258995 system_pods.go:61] "kube-scheduler-newest-cni-20220701230537-10066" [42a3efda-99c0-4cb6-bf0f-25fd8457f229] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0701 23:07:02.446533  258995 system_pods.go:61] "metrics-server-5c6f97fb75-jlkjx" [67a9c41d-57b7-47e2-b9c2-ec9787a1f3d2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:07:02.446560  258995 system_pods.go:61] "storage-provisioner" [0de651df-edaa-428d-96f4-1c501a24c13d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:07:02.446569  258995 system_pods.go:74] duration metric: took 7.055605ms to wait for pod list to return data ...
	I0701 23:07:02.446579  258995 default_sa.go:34] waiting for default service account to be created ...
	I0701 23:07:02.448805  258995 default_sa.go:45] found service account: "default"
	I0701 23:07:02.448826  258995 default_sa.go:55] duration metric: took 2.239766ms for default service account to be created ...
	I0701 23:07:02.448838  258995 kubeadm.go:572] duration metric: took 116.706896ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0701 23:07:02.448863  258995 node_conditions.go:102] verifying NodePressure condition ...
	I0701 23:07:02.451280  258995 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0701 23:07:02.451299  258995 node_conditions.go:123] node cpu capacity is 8
	I0701 23:07:02.451308  258995 node_conditions.go:105] duration metric: took 2.440834ms to run NodePressure ...
	I0701 23:07:02.451317  258995 start.go:216] waiting for startup goroutines ...
	I0701 23:07:02.453174  258995 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 23:07:02.453195  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 23:07:02.453240  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220701230537-10066
	I0701 23:07:02.455269  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/newest-cni-20220701230537-10066/id_rsa Username:docker}
	I0701 23:07:02.490429  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/newest-cni-20220701230537-10066/id_rsa Username:docker}
	I0701 23:07:02.541921  258995 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0701 23:07:02.541948  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0701 23:07:02.544002  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:07:02.548391  258995 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0701 23:07:02.548414  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0701 23:07:02.556595  258995 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0701 23:07:02.556619  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0701 23:07:02.563079  258995 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0701 23:07:02.563100  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0701 23:07:02.571062  258995 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:07:02.571085  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0701 23:07:02.621276  258995 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0701 23:07:02.621307  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0701 23:07:02.629034  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 23:07:02.633706  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:07:02.638778  258995 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0701 23:07:02.638800  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0701 23:07:02.719928  258995 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0701 23:07:02.719955  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0701 23:07:02.736919  258995 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0701 23:07:02.736949  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0701 23:07:02.752189  258995 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0701 23:07:02.752223  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0701 23:07:02.837060  258995 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0701 23:07:02.837088  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0701 23:07:02.921752  258995 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:07:02.921780  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0701 23:07:02.943594  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:07:03.163783  258995 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220701230537-10066"
	I0701 23:07:03.360662  258995 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0701 23:07:03.362046  258995 addons.go:414] enableAddons completed in 1.029861294s
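	The addon flow above follows one pattern per addon: render each manifest in memory, scp it to /etc/kubernetes/addons/ on the node, then apply the whole set with the cluster's own kubectl binary pinned to /var/lib/minikube/kubeconfig. A condensed sketch of the apply step for metrics-server (manifest contents omitted; paths and the sudo KUBECONFIG=... invocation copied from the log):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.24.2/kubectl", "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}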
	I0701 23:07:03.405602  258995 start.go:506] kubectl: 1.24.2, cluster: 1.24.2 (minor skew: 0)
	I0701 23:07:03.407966  258995 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220701230537-10066" cluster and "default" namespace by default
	I0701 23:07:03.664250  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:06.164316  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:08.164599  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:10.664063  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:12.664461  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:15.164133  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:17.164382  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:19.663632  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:21.663818  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:23.663967  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:26.164306  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:28.164930  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:30.664144  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:33.164605  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:35.663750  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:37.664339  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:40.164476  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:42.664249  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:45.164701  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:47.164787  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:49.664037  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:51.664111  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:54.164443  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:56.164925  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:07:58.664564  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:01.163987  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:03.164334  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:05.164460  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:07.164721  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:09.663979  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:11.664277  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:14.164029  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:16.164501  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:18.663698  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:20.664170  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:22.664334  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:25.164419  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:27.165462  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:29.663781  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:31.664057  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:34.164054  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace has status "Ready":"False"
	I0701 23:08:34.659436  245311 pod_ready.go:81] duration metric: took 4m0.00463364s waiting for pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace to be "Ready" ...
	E0701 23:08:34.659463  245311 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-5644d7b6d9-9wsbl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0701 23:08:34.659488  245311 pod_ready.go:38] duration metric: took 4m0.009979733s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:08:34.659524  245311 kubeadm.go:630] restartCluster took 5m9.668121905s
	W0701 23:08:34.659777  245311 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
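	The timeout above ends a 4-minute wait in which pod_ready repeatedly found coredns-5644d7b6d9-9wsbl not Ready; since the old v1.16 control plane never recovered, minikube abandons the restart and falls back to `kubeadm reset` plus a fresh init on the following lines. The readiness check itself amounts to polling the pod's PodReady condition; a client-go sketch of that loop (hypothetical and simplified, not minikube's actual pod_ready.go):

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(c kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		err = wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
			ready, err := podReady(client, "kube-system", "coredns-5644d7b6d9-9wsbl")
			if err != nil {
				return false, nil // transient API errors: keep waiting
			}
			return ready, nil
		})
		fmt.Println("ready:", err == nil)
	}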
	I0701 23:08:34.659816  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0701 23:08:36.863542  245311 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.203696859s)
	I0701 23:08:36.863608  245311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:08:36.872901  245311 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:08:36.879734  245311 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0701 23:08:36.879789  245311 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:08:36.886494  245311 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 23:08:36.886622  245311 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0701 23:08:37.247242  245311 out.go:204]   - Generating certificates and keys ...
	I0701 23:08:37.829860  245311 out.go:204]   - Booting up control plane ...
	I0701 23:08:47.875461  245311 out.go:204]   - Configuring RBAC rules ...
	I0701 23:08:48.291666  245311 cni.go:95] Creating CNI manager for ""
	I0701 23:08:48.291688  245311 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:08:48.293328  245311 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:08:48.294742  245311 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:08:48.298483  245311 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0701 23:08:48.298500  245311 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:08:48.311207  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0701 23:08:48.651857  245311 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 23:08:48.651944  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04 minikube.k8s.io/name=old-k8s-version-20220701225700-10066 minikube.k8s.io/updated_at=2022_07_01T23_08_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:48.651945  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:48.747820  245311 ops.go:34] apiserver oom_adj: -16
	I0701 23:08:48.747905  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:49.337647  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:49.837281  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:50.338027  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:50.837775  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:51.337990  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:51.837271  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:52.337894  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:52.837228  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:53.337964  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:53.837148  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:54.337173  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:54.837440  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:55.338035  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:55.837367  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:56.337614  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:56.837152  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:57.338000  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:57.837608  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:58.337540  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:58.837358  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:59.337869  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:08:59.837945  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:09:00.337719  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:09:00.837142  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:09:01.337879  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:09:01.838090  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:09:02.337084  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:09:02.838073  245311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:09:02.903841  245311 kubeadm.go:1045] duration metric: took 14.251975622s to wait for elevateKubeSystemPrivileges.
	I0701 23:09:02.903872  245311 kubeadm.go:397] StartCluster complete in 5m37.955831889s
	I0701 23:09:02.903898  245311 settings.go:142] acquiring lock: {Name:mk319951f11766fbe002e53432d5b04e4322851e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:09:02.904009  245311 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:09:02.905291  245311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:09:03.420619  245311 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20220701225700-10066" rescaled to 1
	I0701 23:09:03.420688  245311 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 23:09:03.422822  245311 out.go:177] * Verifying Kubernetes components...
	I0701 23:09:03.420742  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 23:09:03.420783  245311 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0701 23:09:03.420998  245311 config.go:178] Loaded profile config "old-k8s-version-20220701225700-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0701 23:09:03.424284  245311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:09:03.424342  245311 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20220701225700-10066"
	I0701 23:09:03.424360  245311 addons.go:65] Setting dashboard=true in profile "old-k8s-version-20220701225700-10066"
	I0701 23:09:03.424367  245311 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-20220701225700-10066"
	I0701 23:09:03.424374  245311 addons.go:153] Setting addon dashboard=true in "old-k8s-version-20220701225700-10066"
	I0701 23:09:03.424379  245311 addons.go:153] Setting addon metrics-server=true in "old-k8s-version-20220701225700-10066"
	W0701 23:09:03.424383  245311 addons.go:162] addon dashboard should already be in state true
	W0701 23:09:03.424386  245311 addons.go:162] addon metrics-server should already be in state true
	I0701 23:09:03.424347  245311 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20220701225700-10066"
	I0701 23:09:03.424405  245311 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20220701225700-10066"
	I0701 23:09:03.424432  245311 host.go:66] Checking if "old-k8s-version-20220701225700-10066" exists ...
	I0701 23:09:03.424385  245311 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20220701225700-10066"
	W0701 23:09:03.424508  245311 addons.go:162] addon storage-provisioner should already be in state true
	I0701 23:09:03.424433  245311 host.go:66] Checking if "old-k8s-version-20220701225700-10066" exists ...
	I0701 23:09:03.424569  245311 host.go:66] Checking if "old-k8s-version-20220701225700-10066" exists ...
	I0701 23:09:03.424817  245311 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220701225700-10066 --format={{.State.Status}}
	I0701 23:09:03.424971  245311 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220701225700-10066 --format={{.State.Status}}
	I0701 23:09:03.424991  245311 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220701225700-10066 --format={{.State.Status}}
	I0701 23:09:03.424995  245311 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220701225700-10066 --format={{.State.Status}}
	I0701 23:09:03.481261  245311 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0701 23:09:03.482772  245311 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0701 23:09:03.483961  245311 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0701 23:09:03.483976  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0701 23:09:03.484018  245311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220701225700-10066
	I0701 23:09:03.482745  245311 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0701 23:09:03.482858  245311 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20220701225700-10066"
	I0701 23:09:03.486360  245311 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 23:09:03.485421  245311 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	W0701 23:09:03.485434  245311 addons.go:162] addon default-storageclass should already be in state true
	I0701 23:09:03.487590  245311 host.go:66] Checking if "old-k8s-version-20220701225700-10066" exists ...
	I0701 23:09:03.487616  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0701 23:09:03.487669  245311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220701225700-10066
	I0701 23:09:03.487702  245311 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:09:03.487728  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 23:09:03.487778  245311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220701225700-10066
	I0701 23:09:03.488071  245311 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220701225700-10066 --format={{.State.Status}}
	I0701 23:09:03.534661  245311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/old-k8s-version-20220701225700-10066/id_rsa Username:docker}
	I0701 23:09:03.535597  245311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/old-k8s-version-20220701225700-10066/id_rsa Username:docker}
	I0701 23:09:03.539275  245311 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 23:09:03.539299  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 23:09:03.539346  245311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220701225700-10066
	I0701 23:09:03.546251  245311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/old-k8s-version-20220701225700-10066/id_rsa Username:docker}
	I0701 23:09:03.548609  245311 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20220701225700-10066" to be "Ready" ...
	I0701 23:09:03.548716  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0701 23:09:03.551308  245311 node_ready.go:49] node "old-k8s-version-20220701225700-10066" has status "Ready":"True"
	I0701 23:09:03.551327  245311 node_ready.go:38] duration metric: took 2.690293ms waiting for node "old-k8s-version-20220701225700-10066" to be "Ready" ...
	I0701 23:09:03.551337  245311 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:09:03.556017  245311 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace to be "Ready" ...
	I0701 23:09:03.581704  245311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/old-k8s-version-20220701225700-10066/id_rsa Username:docker}
	I0701 23:09:03.734911  245311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:09:03.735717  245311 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0701 23:09:03.735740  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0701 23:09:03.735948  245311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 23:09:03.737730  245311 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0701 23:09:03.737747  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0701 23:09:03.828527  245311 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0701 23:09:03.828554  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0701 23:09:03.832137  245311 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0701 23:09:03.832158  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0701 23:09:03.918104  245311 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:09:03.918134  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0701 23:09:03.919243  245311 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0701 23:09:03.919297  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0701 23:09:03.935452  245311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:09:03.936915  245311 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0701 23:09:03.936936  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0701 23:09:04.022810  245311 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0701 23:09:04.022839  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0701 23:09:04.041223  245311 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0701 23:09:04.041259  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0701 23:09:04.126860  245311 start.go:809] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
	I0701 23:09:04.129586  245311 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0701 23:09:04.129609  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0701 23:09:04.145754  245311 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0701 23:09:04.145824  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0701 23:09:04.226029  245311 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:09:04.226067  245311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0701 23:09:04.241856  245311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:09:04.735718  245311 addons.go:383] Verifying addon metrics-server=true in "old-k8s-version-20220701225700-10066"
	I0701 23:09:05.228327  245311 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0701 23:09:05.229445  245311 addons.go:414] enableAddons completed in 1.808670394s
	I0701 23:09:05.635863  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:08.066674  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:10.566292  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:12.634786  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:15.066975  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:17.566914  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:20.066552  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:22.067112  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:24.566713  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:27.066332  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:29.066984  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:31.566393  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:34.066231  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:36.066602  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:38.566970  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:41.066175  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:43.066510  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:45.066856  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:47.566973  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:50.066024  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:52.066322  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:54.566820  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:57.066159  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:09:59.066966  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:01.565479  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	cd8a4893d2d37       6fb66cd78abfe       3 minutes ago       Exited              kindnet-cni               3                   b4daedbbfa0f4
	b6c46b43c578c       a634548d10b03       12 minutes ago      Running             kube-proxy                0                   d3671c6594e46
	ac54680228313       5d725196c1f47       12 minutes ago      Running             kube-scheduler            0                   df504f599edde
	9f4bd4048f717       d3377ffb7177c       12 minutes ago      Running             kube-apiserver            0                   7f2c7d420e188
	6af50f79ce840       34cdf99b1bb3b       12 minutes ago      Running             kube-controller-manager   0                   397a5ee302dea
	b90cae4e4b7ea       aebe758cef4cd       12 minutes ago      Running             etcd                      0                   172c2b390191b
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-07-01 22:57:21 UTC, end at Fri 2022-07-01 23:10:06 UTC. --
	Jul 01 23:03:27 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:03:27.538580207Z" level=warning msg="cleaning up after shim disconnected" id=3df1db606ec0f65fcd1deed89e9040dca419e64d6b6b28ff83e39397caef4d37 namespace=k8s.io
	Jul 01 23:03:27 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:03:27.538604058Z" level=info msg="cleaning up dead shim"
	Jul 01 23:03:27 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:03:27.548145862Z" level=warning msg="cleanup warnings time=\"2022-07-01T23:03:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2810 runtime=io.containerd.runc.v2\n"
	Jul 01 23:03:28 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:03:28.140727530Z" level=info msg="RemoveContainer for \"a277d78bbb6bec9cace8086ab22bf4a57f19419ce64afa2fac270141ae6bbe7d\""
	Jul 01 23:03:28 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:03:28.148517115Z" level=info msg="RemoveContainer for \"a277d78bbb6bec9cace8086ab22bf4a57f19419ce64afa2fac270141ae6bbe7d\" returns successfully"
	Jul 01 23:03:43 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:03:43.455699740Z" level=info msg="CreateContainer within sandbox \"b4daedbbfa0f45311c3e6da958f4916278f256b9a9641a1ff09a6fa60fbd55a0\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jul 01 23:03:43 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:03:43.470747769Z" level=info msg="CreateContainer within sandbox \"b4daedbbfa0f45311c3e6da958f4916278f256b9a9641a1ff09a6fa60fbd55a0\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"642f20c5c19f3b5648b97df8fcf7833306ee1888bafb4919e967ed6510feab34\""
	Jul 01 23:03:43 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:03:43.471222249Z" level=info msg="StartContainer for \"642f20c5c19f3b5648b97df8fcf7833306ee1888bafb4919e967ed6510feab34\""
	Jul 01 23:03:43 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:03:43.620792110Z" level=info msg="StartContainer for \"642f20c5c19f3b5648b97df8fcf7833306ee1888bafb4919e967ed6510feab34\" returns successfully"
	Jul 01 23:06:23 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:06:23.968125385Z" level=info msg="shim disconnected" id=642f20c5c19f3b5648b97df8fcf7833306ee1888bafb4919e967ed6510feab34
	Jul 01 23:06:23 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:06:23.968188273Z" level=warning msg="cleaning up after shim disconnected" id=642f20c5c19f3b5648b97df8fcf7833306ee1888bafb4919e967ed6510feab34 namespace=k8s.io
	Jul 01 23:06:23 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:06:23.968201621Z" level=info msg="cleaning up dead shim"
	Jul 01 23:06:23 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:06:23.978062703Z" level=warning msg="cleanup warnings time=\"2022-07-01T23:06:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2920 runtime=io.containerd.runc.v2\n"
	Jul 01 23:06:24 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:06:24.464297829Z" level=info msg="RemoveContainer for \"3df1db606ec0f65fcd1deed89e9040dca419e64d6b6b28ff83e39397caef4d37\""
	Jul 01 23:06:24 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:06:24.468334690Z" level=info msg="RemoveContainer for \"3df1db606ec0f65fcd1deed89e9040dca419e64d6b6b28ff83e39397caef4d37\" returns successfully"
	Jul 01 23:06:48 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:06:48.455940723Z" level=info msg="CreateContainer within sandbox \"b4daedbbfa0f45311c3e6da958f4916278f256b9a9641a1ff09a6fa60fbd55a0\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jul 01 23:06:48 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:06:48.467762103Z" level=info msg="CreateContainer within sandbox \"b4daedbbfa0f45311c3e6da958f4916278f256b9a9641a1ff09a6fa60fbd55a0\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602\""
	Jul 01 23:06:48 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:06:48.468233956Z" level=info msg="StartContainer for \"cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602\""
	Jul 01 23:06:48 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:06:48.538633288Z" level=info msg="StartContainer for \"cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602\" returns successfully"
	Jul 01 23:09:28 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:09:28.971114283Z" level=info msg="shim disconnected" id=cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602
	Jul 01 23:09:28 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:09:28.971176759Z" level=warning msg="cleaning up after shim disconnected" id=cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602 namespace=k8s.io
	Jul 01 23:09:28 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:09:28.971186825Z" level=info msg="cleaning up dead shim"
	Jul 01 23:09:28 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:09:28.981049968Z" level=warning msg="cleanup warnings time=\"2022-07-01T23:09:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3025 runtime=io.containerd.runc.v2\n"
	Jul 01 23:09:29 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:09:29.803314781Z" level=info msg="RemoveContainer for \"642f20c5c19f3b5648b97df8fcf7833306ee1888bafb4919e967ed6510feab34\""
	Jul 01 23:09:29 no-preload-20220701225718-10066 containerd[517]: time="2022-07-01T23:09:29.807628497Z" level=info msg="RemoveContainer for \"642f20c5c19f3b5648b97df8fcf7833306ee1888bafb4919e967ed6510feab34\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220701225718-10066
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220701225718-10066
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04
	                    minikube.k8s.io/name=no-preload-20220701225718-10066
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_01T22_57_50_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 01 Jul 2022 22:57:44 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220701225718-10066
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 01 Jul 2022 23:10:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 01 Jul 2022 23:08:30 +0000   Fri, 01 Jul 2022 22:57:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 01 Jul 2022 23:08:30 +0000   Fri, 01 Jul 2022 22:57:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 01 Jul 2022 23:08:30 +0000   Fri, 01 Jul 2022 22:57:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 01 Jul 2022 23:08:30 +0000   Fri, 01 Jul 2022 22:57:41 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-20220701225718-10066
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873484Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873484Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                82dabe3f-d133-4afb-a4d2-ee1450b85ce0
	  Boot ID:                    a4927dcd-d031-4927-a8c8-2ea0f9a10287
	  Kernel Version:             5.15.0-1012-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.6
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-20220701225718-10066                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-b5wkl                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-no-preload-20220701225718-10066             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-no-preload-20220701225718-10066    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-5ck82                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-no-preload-20220701225718-10066             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node no-preload-20220701225718-10066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node no-preload-20220701225718-10066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node no-preload-20220701225718-10066 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m   node-controller  Node no-preload-20220701225718-10066 event: Registered Node no-preload-20220701225718-10066 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +1.002277] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +2.015803] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000004] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +4.255546] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000007] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000001] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000005] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000011] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +8.195166] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000007] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000003] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000003] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	
	* 
	* ==> etcd [b90cae4e4b7ea37050f35e70b6b5bc7d1c0d2b243219ea44859d2ac0853bb228] <==
	* {"level":"info","ts":"2022-07-01T22:57:40.719Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-01T22:57:40.722Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2022-07-01T22:57:40.719Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-01T22:57:40.719Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-01T22:57:40.723Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-01T22:57:40.723Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2022-07-01T22:57:45.376Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.245133ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:discovery\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2022-07-01T22:57:45.376Z","caller":"traceutil/trace.go:171","msg":"trace[98245775] range","detail":"{range_begin:/registry/clusterroles/system:discovery; range_end:; response_count:0; response_revision:80; }","duration":"100.3626ms","start":"2022-07-01T22:57:45.275Z","end":"2022-07-01T22:57:45.376Z","steps":["trace[98245775] 'agreement among raft nodes before linearized reading'  (duration: 96.832114ms)"],"step_count":1}
	{"level":"info","ts":"2022-07-01T22:57:49.038Z","caller":"traceutil/trace.go:171","msg":"trace[1927820922] linearizableReadLoop","detail":"{readStateIndex:259; appliedIndex:259; }","duration":"109.87435ms","start":"2022-07-01T22:57:48.928Z","end":"2022-07-01T22:57:49.038Z","steps":["trace[1927820922] 'read index received'  (duration: 109.866721ms)","trace[1927820922] 'applied index is now lower than readState.Index'  (duration: 6.557µs)"],"step_count":2}
	{"level":"warn","ts":"2022-07-01T22:57:49.038Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.030645ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-no-preload-20220701225718-10066\" ","response":"range_response_count:1 size:6186"}
	{"level":"info","ts":"2022-07-01T22:57:49.038Z","caller":"traceutil/trace.go:171","msg":"trace[1867140299] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-no-preload-20220701225718-10066; range_end:; response_count:1; response_revision:252; }","duration":"110.085806ms","start":"2022-07-01T22:57:48.928Z","end":"2022-07-01T22:57:49.038Z","steps":["trace[1867140299] 'agreement among raft nodes before linearized reading'  (duration: 109.986775ms)"],"step_count":1}
	{"level":"info","ts":"2022-07-01T22:57:49.448Z","caller":"traceutil/trace.go:171","msg":"trace[2075058343] linearizableReadLoop","detail":"{readStateIndex:261; appliedIndex:261; }","duration":"120.342992ms","start":"2022-07-01T22:57:49.328Z","end":"2022-07-01T22:57:49.448Z","steps":["trace[2075058343] 'read index received'  (duration: 120.337394ms)","trace[2075058343] 'applied index is now lower than readState.Index'  (duration: 4.619µs)"],"step_count":2}
	{"level":"warn","ts":"2022-07-01T22:57:49.448Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"120.51147ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-no-preload-20220701225718-10066\" ","response":"range_response_count:1 size:4098"}
	{"level":"info","ts":"2022-07-01T22:57:49.448Z","caller":"traceutil/trace.go:171","msg":"trace[1223968225] range","detail":"{range_begin:/registry/pods/kube-system/etcd-no-preload-20220701225718-10066; range_end:; response_count:1; response_revision:252; }","duration":"120.565364ms","start":"2022-07-01T22:57:49.328Z","end":"2022-07-01T22:57:49.448Z","steps":["trace[1223968225] 'agreement among raft nodes before linearized reading'  (duration: 120.458386ms)"],"step_count":1}
	{"level":"warn","ts":"2022-07-01T22:57:50.278Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"155.261704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:3094"}
	{"level":"info","ts":"2022-07-01T22:57:50.278Z","caller":"traceutil/trace.go:171","msg":"trace[2055409456] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:258; }","duration":"155.385267ms","start":"2022-07-01T22:57:50.122Z","end":"2022-07-01T22:57:50.278Z","steps":["trace[2055409456] 'agreement among raft nodes before linearized reading'  (duration: 70.598719ms)","trace[2055409456] 'range keys from in-memory index tree'  (duration: 84.618935ms)"],"step_count":2}
	{"level":"info","ts":"2022-07-01T22:57:50.278Z","caller":"traceutil/trace.go:171","msg":"trace[2103903859] transaction","detail":"{read_only:false; response_revision:259; number_of_response:1; }","duration":"149.94207ms","start":"2022-07-01T22:57:50.128Z","end":"2022-07-01T22:57:50.278Z","steps":["trace[2103903859] 'process raft request'  (duration: 65.293492ms)","trace[2103903859] 'compare'  (duration: 84.545015ms)"],"step_count":2}
	{"level":"info","ts":"2022-07-01T22:57:51.109Z","caller":"traceutil/trace.go:171","msg":"trace[781945869] linearizableReadLoop","detail":"{readStateIndex:272; appliedIndex:272; }","duration":"174.267022ms","start":"2022-07-01T22:57:50.935Z","end":"2022-07-01T22:57:51.109Z","steps":["trace[781945869] 'read index received'  (duration: 174.257127ms)","trace[781945869] 'applied index is now lower than readState.Index'  (duration: 8.133µs)"],"step_count":2}
	{"level":"warn","ts":"2022-07-01T22:57:51.175Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"240.517113ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-07-01T22:57:51.175Z","caller":"traceutil/trace.go:171","msg":"trace[1713793044] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:262; }","duration":"240.608471ms","start":"2022-07-01T22:57:50.935Z","end":"2022-07-01T22:57:51.175Z","steps":["trace[1713793044] 'agreement among raft nodes before linearized reading'  (duration: 174.376947ms)","trace[1713793044] 'range keys from in-memory index tree'  (duration: 66.10988ms)"],"step_count":2}
	{"level":"warn","ts":"2022-07-01T22:57:52.428Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.210117ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-no-preload-20220701225718-10066\" ","response":"range_response_count:1 size:4359"}
	{"level":"info","ts":"2022-07-01T22:57:52.428Z","caller":"traceutil/trace.go:171","msg":"trace[322197520] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-no-preload-20220701225718-10066; range_end:; response_count:1; response_revision:265; }","duration":"103.318537ms","start":"2022-07-01T22:57:52.325Z","end":"2022-07-01T22:57:52.428Z","steps":["trace[322197520] 'range keys from in-memory index tree'  (duration: 103.086305ms)"],"step_count":1}
	{"level":"info","ts":"2022-07-01T22:58:35.992Z","caller":"traceutil/trace.go:171","msg":"trace[372511059] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"131.829529ms","start":"2022-07-01T22:58:35.860Z","end":"2022-07-01T22:58:35.992Z","steps":["trace[372511059] 'process raft request'  (duration: 34.207641ms)","trace[372511059] 'compare'  (duration: 97.515253ms)"],"step_count":2}
	{"level":"info","ts":"2022-07-01T23:07:41.638Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":451}
	{"level":"info","ts":"2022-07-01T23:07:41.639Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":451,"took":"426.131µs"}
	
	* 
	* ==> kernel <==
	*  23:10:06 up 52 min,  0 users,  load average: 0.84, 1.28, 1.80
	Linux no-preload-20220701225718-10066 5.15.0-1012-gcp #17~20.04.1-Ubuntu SMP Thu Jun 23 16:10:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [9f4bd4048f717703723402dc2bd31164255acb9e131a3e8a35d47bf250201012] <==
	* I0701 22:57:44.359693       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0701 22:57:44.361586       1 cache.go:39] Caches are synced for autoregister controller
	I0701 22:57:44.417898       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0701 22:57:44.418504       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0701 22:57:44.418589       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0701 22:57:44.418642       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0701 22:57:44.418677       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0701 22:57:44.937777       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0701 22:57:45.263186       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0701 22:57:45.266534       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0701 22:57:45.266585       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0701 22:57:45.755535       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0701 22:57:45.789142       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0701 22:57:45.863518       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0701 22:57:45.869086       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I0701 22:57:45.870171       1 controller.go:611] quota admission added evaluator for: endpoints
	I0701 22:57:45.873910       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0701 22:57:46.404473       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0701 22:57:47.255908       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0701 22:57:47.263186       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0701 22:57:47.272732       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0701 22:57:47.350282       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0701 22:58:00.132849       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0701 22:58:00.481609       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0701 22:58:01.229093       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [6af50f79ce840cbfb4c4974a97d45453382d577490ba8a05d4aa2d0412fbf462] <==
	* I0701 22:57:59.430004       1 shared_informer.go:262] Caches are synced for endpoint
	I0701 22:57:59.430047       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0701 22:57:59.432355       1 shared_informer.go:262] Caches are synced for job
	I0701 22:57:59.437580       1 shared_informer.go:262] Caches are synced for PV protection
	I0701 22:57:59.523196       1 shared_informer.go:262] Caches are synced for taint
	I0701 22:57:59.523308       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0701 22:57:59.523359       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0701 22:57:59.523407       1 node_lifecycle_controller.go:1014] Missing timestamp for Node no-preload-20220701225718-10066. Assuming now as a timestamp.
	I0701 22:57:59.523479       1 event.go:294] "Event occurred" object="no-preload-20220701225718-10066" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node no-preload-20220701225718-10066 event: Registered Node no-preload-20220701225718-10066 in Controller"
	I0701 22:57:59.523500       1 node_lifecycle_controller.go:1165] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0701 22:57:59.599326       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I0701 22:57:59.611223       1 shared_informer.go:262] Caches are synced for stateful set
	I0701 22:57:59.625707       1 shared_informer.go:262] Caches are synced for resource quota
	I0701 22:57:59.631783       1 shared_informer.go:262] Caches are synced for resource quota
	I0701 22:57:59.679597       1 shared_informer.go:262] Caches are synced for daemon sets
	I0701 22:58:00.099864       1 shared_informer.go:262] Caches are synced for garbage collector
	I0701 22:58:00.128120       1 shared_informer.go:262] Caches are synced for garbage collector
	I0701 22:58:00.128144       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0701 22:58:00.134791       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0701 22:58:00.470246       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0701 22:58:00.486736       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5ck82"
	I0701 22:58:00.488364       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-b5wkl"
	I0701 22:58:00.541423       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-jzmvd"
	I0701 22:58:00.547729       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-mbfz4"
	I0701 22:58:00.567152       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-jzmvd"
	
	* 
	* ==> kube-proxy [b6c46b43c578cf7a32b68dbbe142a34080a7ce8d919b3b25cd350be67762ece8] <==
	* I0701 22:58:01.121577       1 node.go:163] Successfully retrieved node IP: 192.168.94.2
	I0701 22:58:01.121673       1 server_others.go:138] "Detected node IP" address="192.168.94.2"
	I0701 22:58:01.121706       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0701 22:58:01.224547       1 server_others.go:206] "Using iptables Proxier"
	I0701 22:58:01.224586       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0701 22:58:01.224598       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0701 22:58:01.224617       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0701 22:58:01.224645       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 22:58:01.224819       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 22:58:01.225041       1 server.go:661] "Version info" version="v1.24.2"
	I0701 22:58:01.225053       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 22:58:01.225770       1 config.go:226] "Starting endpoint slice config controller"
	I0701 22:58:01.225786       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0701 22:58:01.225872       1 config.go:317] "Starting service config controller"
	I0701 22:58:01.225877       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0701 22:58:01.226097       1 config.go:444] "Starting node config controller"
	I0701 22:58:01.226102       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0701 22:58:01.325962       1 shared_informer.go:262] Caches are synced for service config
	I0701 22:58:01.326036       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0701 22:58:01.326305       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [ac546802283136693ffa6f232baccec2dc6bf324f6201f319db62ceb6e58dae8] <==
	* E0701 22:57:44.348537       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 22:57:44.348542       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0701 22:57:44.349659       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0701 22:57:44.349704       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0701 22:57:44.349737       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0701 22:57:44.349780       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0701 22:57:45.297253       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0701 22:57:45.297294       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0701 22:57:45.344779       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 22:57:45.344819       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0701 22:57:45.359826       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0701 22:57:45.359853       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0701 22:57:45.425348       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 22:57:45.425400       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0701 22:57:45.441898       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0701 22:57:45.441930       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0701 22:57:45.447744       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0701 22:57:45.447773       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0701 22:57:45.475136       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0701 22:57:45.475182       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0701 22:57:45.483371       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0701 22:57:45.483409       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0701 22:57:45.598153       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0701 22:57:45.598194       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0701 22:57:47.044759       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-07-01 22:57:21 UTC, end at Fri 2022-07-01 23:10:06 UTC. --
	Jul 01 23:08:37 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:08:37.793805    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:08:42 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:08:42.795064    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:08:47 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:08:47.795873    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:08:52 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:08:52.796480    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:08:57 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:08:57.797443    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:02 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:02.798339    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:07 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:07.799070    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:12 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:12.799969    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:17 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:17.800524    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:22 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:22.801790    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:27 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:27.802792    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:29 no-preload-20220701225718-10066 kubelet[1741]: I0701 23:09:29.802017    1741 scope.go:110] "RemoveContainer" containerID="642f20c5c19f3b5648b97df8fcf7833306ee1888bafb4919e967ed6510feab34"
	Jul 01 23:09:29 no-preload-20220701225718-10066 kubelet[1741]: I0701 23:09:29.802314    1741 scope.go:110] "RemoveContainer" containerID="cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602"
	Jul 01 23:09:29 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:29.802716    1741 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-b5wkl_kube-system(bc770683-78b7-449f-a0af-5a2cc006275c)\"" pod="kube-system/kindnet-b5wkl" podUID=bc770683-78b7-449f-a0af-5a2cc006275c
	Jul 01 23:09:32 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:32.803650    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:37 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:37.804637    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:41 no-preload-20220701225718-10066 kubelet[1741]: I0701 23:09:41.453769    1741 scope.go:110] "RemoveContainer" containerID="cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602"
	Jul 01 23:09:41 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:41.454069    1741 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-b5wkl_kube-system(bc770683-78b7-449f-a0af-5a2cc006275c)\"" pod="kube-system/kindnet-b5wkl" podUID=bc770683-78b7-449f-a0af-5a2cc006275c
	Jul 01 23:09:42 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:42.806008    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:47 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:47.807388    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:52 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:52.808776    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:09:53 no-preload-20220701225718-10066 kubelet[1741]: I0701 23:09:53.453199    1741 scope.go:110] "RemoveContainer" containerID="cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602"
	Jul 01 23:09:53 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:53.453457    1741 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-b5wkl_kube-system(bc770683-78b7-449f-a0af-5a2cc006275c)\"" pod="kube-system/kindnet-b5wkl" podUID=bc770683-78b7-449f-a0af-5a2cc006275c
	Jul 01 23:09:57 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:09:57.809897    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:10:02 no-preload-20220701225718-10066 kubelet[1741]: E0701 23:10:02.811183    1741 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220701225718-10066 -n no-preload-20220701225718-10066
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220701225718-10066 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-6d4b75cb6d-mbfz4 storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220701225718-10066 describe pod busybox coredns-6d4b75cb6d-mbfz4 storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220701225718-10066 describe pod busybox coredns-6d4b75cb6d-mbfz4 storage-provisioner: exit status 1 (58.647088ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r9hcs (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-r9hcs:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  2m49s (x2 over 8m4s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-6d4b75cb6d-mbfz4" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220701225718-10066 describe pod busybox coredns-6d4b75cb6d-mbfz4 storage-provisioner: exit status 1
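Note that the two NotFound errors in the stderr block above are a namespace mismatch rather than evidence the pods are gone: the post-mortem describe is issued without -n, so it queries the default namespace, while coredns-6d4b75cb6d-mbfz4 and storage-provisioner live in kube-system. A minimal re-check, assuming the same kubeconfig context is still valid, would split the describe by namespace:

	kubectl --context no-preload-20220701225718-10066 describe pod busybox -n default
	kubectl --context no-preload-20220701225718-10066 -n kube-system describe pod coredns-6d4b75cb6d-mbfz4 storage-provisioner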
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (484.43s)
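Reading the kubelet log back from this failure, the chain is: the kindnet-cni container (pod kindnet-b5wkl) is in CrashLoopBackOff, so the CNI plugin never initializes, the node keeps its node.kubernetes.io/not-ready taint, and busybox stays Pending with FailedScheduling for the full 8m0s wait. A sketch for confirming that chain by hand, assuming the cluster is still running (the pod name kindnet-b5wkl is taken from the kubelet log above):

	# Is the not-ready taint still on the node?
	kubectl --context no-preload-20220701225718-10066 get nodes -o jsonpath='{.items[*].spec.taints}'
	# Why is the CNI container crashing? Inspect the previous (failed) attempt.
	kubectl --context no-preload-20220701225718-10066 -n kube-system logs kindnet-b5wkl -c kindnet-cni --previous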

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/DeployApp (484.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220701230032-10066 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [547a2d49-f791-40d3-92e3-4acacc10b8c2] Pending
helpers_test.go:342: "busybox" [547a2d49-f791-40d3-92e3-4acacc10b8c2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
E0701 23:05:13.509764   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:196: ***** TestStartStop/group/default-k8s-different-port/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220701230032-10066 -n default-k8s-different-port-20220701230032-10066
start_stop_delete_test.go:196: TestStartStop/group/default-k8s-different-port/serial/DeployApp: showing logs for failed pods as of 2022-07-01 23:13:11.461144232 +0000 UTC m=+2980.766081761
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220701230032-10066 describe po busybox -n default
start_stop_delete_test.go:196: (dbg) kubectl --context default-k8s-different-port-20220701230032-10066 describe po busybox -n default:
Name:         busybox
Namespace:    default
Priority:     0
Node:         <none>
Labels:       integration-test=busybox
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
busybox:
Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
Port:       <none>
Host Port:  <none>
Command:
sleep
3600
Environment:  <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6qwg6 (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-6qwg6:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age                 From               Message
----     ------            ----                ----               -------
Warning  FailedScheduling  2m45s (x2 over 8m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220701230032-10066 logs busybox -n default
start_stop_delete_test.go:196: (dbg) kubectl --context default-k8s-different-port-20220701230032-10066 logs busybox -n default:
start_stop_delete_test.go:196: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
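This is the same scheduling signature as the no-preload failure above: the node carries the not-ready taint because no CNI plugin has come up, so the scheduler has nowhere to place busybox. Two quick probes, assuming the profile is still up (/etc/cni/net.d is the conventional directory containerd reads CNI configs from, an assumption about this image's layout):

	kubectl --context default-k8s-different-port-20220701230032-10066 get nodes -o wide
	out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220701230032-10066 sudo ls /etc/cni/net.d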
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220701230032-10066
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220701230032-10066:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93",
	        "Created": "2022-07-01T23:00:40.408283404Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 240217,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-01T23:00:40.782604309Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93/hostname",
	        "HostsPath": "/var/lib/docker/containers/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93/hosts",
	        "LogPath": "/var/lib/docker/containers/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93-json.log",
	        "Name": "/default-k8s-different-port-20220701230032-10066",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220701230032-10066:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220701230032-10066",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dee8e48f45f2d547450f4daebe89b962ba00d4c3c0e85728311b87d0be50d5f2-init/diff:/var/lib/docker/overlay2/edb6d17bdc5cabdb1fac75f829fa6b696b451285a56daf994d10472ac162d1fa/diff:/var/lib/docker/overlay2/12553db54f9401cb3cd4ea304c82950d708bec4ffbeafa05f361df03389b23f5/diff:/var/lib/docker/overlay2/7e3fcd2813dd5acb24030b1419b7a8a53ea2329287a62c3cd166eb5e47617a4e/diff:/var/lib/docker/overlay2/15f31da888d327ea20972c47badf360a35e5d2b17a3fae8f69cb0d0d1db2bfa0/diff:/var/lib/docker/overlay2/4d7fcdaa5483b16cc999e79056aae74f5814a7aca964696154ac6ec31eebe508/diff:/var/lib/docker/overlay2/9f3a7dbc3c70fba7c77f5d8e54e394ac0c37123defe036d76af7d048d08b974e/diff:/var/lib/docker/overlay2/4f0b0b0343fd3ac1a6d5f774e91228b75f1e33f810714d09d38ecd1e38cfe250/diff:/var/lib/docker/overlay2/bfd684b2b6e2cfe1f30901dbf02332074c65e296b386b083ea38c10128b42115/diff:/var/lib/docker/overlay2/d05e53b39ab1962ee7dd1a17c7c6df7e7db940db5a19b0d9312ccafaf3c67afc/diff:/var/lib/docker/overlay2/7a24a3
8f96e1aff9985bf714d962a37888eced9c4b8a3018e499080634306871/diff:/var/lib/docker/overlay2/b54ee85e20ecd71a593d453d879a26c40869049e30ba5b18b4c021c4cf667ec6/diff:/var/lib/docker/overlay2/cc9f249b45cce30a5a0432a644435d68e765dc097e3bfdbde41b7d725f258866/diff:/var/lib/docker/overlay2/dac381b24f77e5de76295d05ca2d4ae6a611d31efdf28050dc8acbc16f893037/diff:/var/lib/docker/overlay2/fe9cd2517ce90b2330101b987c76605b9d73f40b8c967e33975ad22bd3e427df/diff:/var/lib/docker/overlay2/34da7eb6e789c53ba8f86e0d904243d3314979a523044238ab8b7e9e95f685d4/diff:/var/lib/docker/overlay2/44c3ffe75f70e4d9d5d17064609a4a167961cdca81aab03b2a72dfbe05d09f41/diff:/var/lib/docker/overlay2/76feeb878d0278a48230287307ce3a01000291c1284d2222d114e37836ebc153/diff:/var/lib/docker/overlay2/3ab5eb284edb3402610bb5d4fbd0d8c3fc6b53fd39d2a66cc3a9bb1e313fe5ee/diff:/var/lib/docker/overlay2/7604bb52dbaf4e2c359f0b2b11aa345bbf33498b0346044f4c6ff3178a3e8df5/diff:/var/lib/docker/overlay2/0a19abc50663f9812511b849524a7625a16f183fa64aff5a0a9c1c7664cc2027/diff:/var/lib/d
ocker/overlay2/00d25c59a62da71857955ae29c7c803b77cb4c97b23913f166c62440a3b63456/diff:/var/lib/docker/overlay2/cd6c06e5c4d7bdfb5a78ffcf6c0cf331a6101b713bd238ad5c0ab119ad599bf4/diff:/var/lib/docker/overlay2/73ae644d477b17687dc53b7176c9e05aa10e0e0cc22b0d2adbd96493280d2696/diff:/var/lib/docker/overlay2/1490b9c59b00b3ee6078671a8023960891ee2bfc21bfe92c12a71c87ea04fff1/diff:/var/lib/docker/overlay2/42f40e76b67a2f74340f543b0beab44f98b10e0a0903d4006a173272c52a31d0/diff:/var/lib/docker/overlay2/648763e18e8325f20ff26dd88a1788a2d9e575a9ca24f44b2430918381add179/diff:/var/lib/docker/overlay2/955b13af9b6a054abdcd0dfdf71ce57d6da1b5718bee40c52fc7739b4ff16496/diff:/var/lib/docker/overlay2/1e55f6bfa306e6667808548601e508c0335b7021c9c6791ee5ed3df2514aff12/diff:/var/lib/docker/overlay2/ed1fd4b4489438557a54639b2ed769eb0371193cd0129b222edea8f79274f7e8/diff:/var/lib/docker/overlay2/c4058f514a3e669de2d6139c7bc94d4187679acabf288e44e25b5d50db5b14ac/diff:/var/lib/docker/overlay2/f55c5f04c4a64f585a9b90e7f23e5d5142014f06b3bf877739879029b76
b574f/diff:/var/lib/docker/overlay2/74b9141dda92027ee02e8b3abc4bb05ed13ce01d58f4c3d644e7235e411a863e/diff:/var/lib/docker/overlay2/53dd59b4b1dfa382f4b296dffd27b84e247851098ea1c719fa3b8ed3e158e73d/diff:/var/lib/docker/overlay2/01e3082bf93042f963fa4d0b92796bd5a1f3663e87f24a76a6fb5f3dabdb3d2f/diff:/var/lib/docker/overlay2/6b96d2f01ba7bf16240d81397be753c74fa1eb78889ce36d1a88224496f5ee64/diff:/var/lib/docker/overlay2/1c326eb1859e3f119094b72e595fe89b3be1895193b0f9124ec3aa7be78fa0ff/diff:/var/lib/docker/overlay2/9c31f48fe49160b4b3d824b787329ea1d50133fd86b0c7f2054bebed2bd136b4/diff:/var/lib/docker/overlay2/6b78f7313fcedc631cc839c84f80e065670f47ccc8e548578527bec6b0f2cce3/diff:/var/lib/docker/overlay2/6bf15a167fccdef650b6e6b0daa521189e2c744a9a41640bc0e963421f139437/diff:/var/lib/docker/overlay2/b01230a9beae37bded75410c1a3849626080918fdc36535a1f4092f9227a7ccf/diff:/var/lib/docker/overlay2/793ce1632cff927c8e2528b23a62c94ed141cb52d5df04225085072c61ca5bb7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dee8e48f45f2d547450f4daebe89b962ba00d4c3c0e85728311b87d0be50d5f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dee8e48f45f2d547450f4daebe89b962ba00d4c3c0e85728311b87d0be50d5f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dee8e48f45f2d547450f4daebe89b962ba00d4c3c0e85728311b87d0be50d5f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220701230032-10066",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220701230032-10066/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220701230032-10066",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220701230032-10066",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220701230032-10066",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b84131f0a443f3e46a27c4a53bbb599561e5894a5499246152418e29a547de10",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49417"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49416"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49413"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49415"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49414"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b84131f0a443",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220701230032-10066": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "261fd4f89726",
	                        "default-k8s-different-port-20220701230032-10066"
	                    ],
	                    "NetworkID": "08b054338871e09e9987c4187ebe43c21ee49646be113b14ac2205c8647ea77d",
	                    "EndpointID": "dc3e5e6cc3047caf3c0c1415491005074769713a8b3dbbad0e642c61ea3eecd8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
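The inspect dump above is the full document; when only a couple of fields matter (the container state and the dynamically assigned 494xx host ports), docker inspect's format template keeps the output to one line each. A minimal sketch against the same container:

	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' default-k8s-different-port-20220701230032-10066
	docker inspect -f '{{json .NetworkSettings.Ports}}' default-k8s-different-port-20220701230032-10066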
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220701230032-10066 -n default-k8s-different-port-20220701230032-10066
E0701 23:13:11.918179   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220701230032-10066 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:00 UTC | 01 Jul 22 23:00 UTC |
	|         | disable-driver-mounts-20220701230032-10066                 |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:00 UTC |                     |
	|         | default-k8s-different-port-20220701230032-10066            |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:02 UTC | 01 Jul 22 23:02 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:02 UTC | 01 Jul 22 23:03 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:03 UTC | 01 Jul 22 23:03 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:03 UTC |                     |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                          |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |          |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker                       |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	| start   | -p newest-cni-20220701230537-10066 --memory=2200           | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:06 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p newest-cni-20220701230537-10066 --memory=2200           | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:07 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC | 01 Jul 22 23:10 UTC |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC | 01 Jul 22 23:10 UTC |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC | 01 Jul 22 23:10 UTC |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC |                     |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |          |         |         |                     |                     |
	|         | --wait=true --preload=false                                |          |         |         |                     |                     |
	|         | --driver=docker                                            |          |         |         |                     |                     |
	|         | --container-runtime=containerd                             |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/01 23:10:28
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 23:10:28.436068  269883 out.go:296] Setting OutFile to fd 1 ...
	I0701 23:10:28.436180  269883 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:10:28.436194  269883 out.go:309] Setting ErrFile to fd 2...
	I0701 23:10:28.436201  269883 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:10:28.436618  269883 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 23:10:28.436880  269883 out.go:303] Setting JSON to false
	I0701 23:10:28.438233  269883 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3182,"bootTime":1656713847,"procs":500,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 23:10:28.438304  269883 start.go:125] virtualization: kvm guest
	I0701 23:10:28.441407  269883 out.go:177] * [no-preload-20220701225718-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0701 23:10:28.443028  269883 out.go:177]   - MINIKUBE_LOCATION=14483
	I0701 23:10:28.442993  269883 notify.go:193] Checking for updates...
	I0701 23:10:28.444809  269883 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 23:10:28.446488  269883 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:10:28.448125  269883 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 23:10:28.449746  269883 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 23:10:28.451761  269883 config.go:178] Loaded profile config "no-preload-20220701225718-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:10:28.452167  269883 driver.go:360] Setting default libvirt URI to qemu:///system
	I0701 23:10:28.493617  269883 docker.go:137] docker version: linux-20.10.17
	I0701 23:10:28.493713  269883 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:10:28.600580  269883 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-07-01 23:10:28.523096353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:10:28.600696  269883 docker.go:254] overlay module found
	I0701 23:10:28.603102  269883 out.go:177] * Using the docker driver based on existing profile
	I0701 23:10:28.604609  269883 start.go:284] selected driver: docker
	I0701 23:10:28.604630  269883 start.go:808] validating driver "docker" against &{Name:no-preload-20220701225718-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:no-preload-20220701225718-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:10:28.604744  269883 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 23:10:28.605512  269883 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:10:28.710819  269883 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-07-01 23:10:28.635526958 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:10:28.711050  269883 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 23:10:28.711069  269883 cni.go:95] Creating CNI manager for ""
	I0701 23:10:28.711075  269883 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:10:28.711086  269883 start_flags.go:310] config:
	{Name:no-preload-20220701225718-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:no-preload-20220701225718-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:10:28.713353  269883 out.go:177] * Starting control plane node no-preload-20220701225718-10066 in cluster no-preload-20220701225718-10066
	I0701 23:10:28.714773  269883 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0701 23:10:28.716148  269883 out.go:177] * Pulling base image ...
	I0701 23:10:28.717448  269883 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:10:28.717489  269883 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0701 23:10:28.717646  269883 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/config.json ...
	I0701 23:10:28.717791  269883 cache.go:107] acquiring lock: {Name:mk3aed9edf4e045130f7a3c6fdc7a324a577ec7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.717837  269883 cache.go:107] acquiring lock: {Name:mk8030c0afbd72b38281e129af86f3686df5df89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.717878  269883 cache.go:107] acquiring lock: {Name:mk7ec70fd71856cc28acc69a0da3b72748a4420a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.717910  269883 cache.go:107] acquiring lock: {Name:mk881497b5d07c75cf2f158738d77e27bd2a369d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.717808  269883 cache.go:107] acquiring lock: {Name:mk9ab11f02b498228e877e934d5aaa541b21cbf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.717960  269883 cache.go:107] acquiring lock: {Name:mk5766c1b843c08c650f7c84836d8506a465b496 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.717997  269883 cache.go:107] acquiring lock: {Name:mk3b0e90d77cbe629b1ed14b104838f8ec036785 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.718031  269883 cache.go:107] acquiring lock: {Name:mk72f6f6d64839ffc62747fa568c11250cb4422d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.718093  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2 exists
	I0701 23:10:28.718103  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0701 23:10:28.718116  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2 exists
	I0701 23:10:28.718122  269883 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 184.555µs
	I0701 23:10:28.718134  269883 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0701 23:10:28.718115  269883 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.24.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2" took 279.008µs
	I0701 23:10:28.718142  269883 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.24.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2 succeeded
	I0701 23:10:28.718093  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0701 23:10:28.718153  269883 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.24.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2" took 281.275µs
	I0701 23:10:28.718166  269883 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.24.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2 succeeded
	I0701 23:10:28.718164  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 exists
	I0701 23:10:28.718177  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2 exists
	I0701 23:10:28.718210  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2 exists
	I0701 23:10:28.718216  269883 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.24.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2" took 228.65µs
	I0701 23:10:28.718224  269883 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.24.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2" took 427.452µs
	I0701 23:10:28.718233  269883 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.24.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2 succeeded
	I0701 23:10:28.718167  269883 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 391.086µs
	I0701 23:10:28.718242  269883 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0701 23:10:28.718249  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 exists
	I0701 23:10:28.718229  269883 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.24.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2 succeeded
	I0701 23:10:28.718187  269883 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0" took 349.266µs
	I0701 23:10:28.718259  269883 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.3-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 succeeded
	I0701 23:10:28.718262  269883 cache.go:96] cache image "k8s.gcr.io/pause:3.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7" took 344.995µs
	I0701 23:10:28.718273  269883 cache.go:80] save to tar file k8s.gcr.io/pause:3.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 succeeded
	I0701 23:10:28.718283  269883 cache.go:87] Successfully saved all images to host disk.
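The cache.go burst above shows the shape of the image cache: one lock per image, a stat of the cached tarball, and an early return on a hit, which is why each "cache image ... took" line reports microseconds. A minimal Go sketch of that pattern; cacheImage and downloadToTar are hypothetical stand-ins, not minikube's actual internals:

    package main

    import (
        "fmt"
        "os"
        "sync"
        "time"
    )

    // locks holds one mutex per image name, mirroring the per-image
    // "acquiring lock" lines in the log above.
    var locks sync.Map

    func cacheImage(img, tarPath string) error {
        mu, _ := locks.LoadOrStore(img, &sync.Mutex{})
        mu.(*sync.Mutex).Lock()
        defer mu.(*sync.Mutex).Unlock()

        start := time.Now()
        if _, err := os.Stat(tarPath); err == nil {
            // cache hit: the tarball is already on disk, skip the download
            fmt.Printf("cache image %q took %s (already exists)\n", img, time.Since(start))
            return nil
        }
        return downloadToTar(img, tarPath)
    }

    // downloadToTar is a stub standing in for the real pull-and-save step.
    func downloadToTar(img, tarPath string) error { return nil }

    func main() {
        _ = cacheImage("k8s.gcr.io/pause:3.7", "/tmp/cache/pause_3.7")
    }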
	I0701 23:10:28.752422  269883 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0701 23:10:28.752465  269883 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0701 23:10:28.752487  269883 cache.go:208] Successfully downloaded all kic artifacts
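The kicbase reference checked above pins a tag and a sha256 digest together, so the daemon lookup is content-addressed: with an image of that digest already present locally, both the pull and the load are skipped.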
	I0701 23:10:28.752528  269883 start.go:352] acquiring machines lock for no-preload-20220701225718-10066: {Name:mk0df5e406dc07f9b5bbaf453954c11d3f5f2a14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.752631  269883 start.go:356] acquired machines lock for "no-preload-20220701225718-10066" in 71.505µs
	I0701 23:10:28.752665  269883 start.go:94] Skipping create...Using existing machine configuration
	I0701 23:10:28.752673  269883 fix.go:55] fixHost starting: 
	I0701 23:10:28.752958  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:10:28.785790  269883 fix.go:103] recreateIfNeeded on no-preload-20220701225718-10066: state=Stopped err=<nil>
	W0701 23:10:28.785828  269883 fix.go:129] unexpected machine state, will restart: <nil>
	I0701 23:10:28.788364  269883 out.go:177] * Restarting existing docker container for "no-preload-20220701225718-10066" ...
	I0701 23:10:28.066781  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:30.566251  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:28.789918  269883 cli_runner.go:164] Run: docker start no-preload-20220701225718-10066
	I0701 23:10:29.179864  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:10:29.217535  269883 kic.go:416] container "no-preload-20220701225718-10066" state is running.
	I0701 23:10:29.217931  269883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220701225718-10066
	I0701 23:10:29.251855  269883 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/config.json ...
	I0701 23:10:29.252122  269883 machine.go:88] provisioning docker machine ...
	I0701 23:10:29.252152  269883 ubuntu.go:169] provisioning hostname "no-preload-20220701225718-10066"
	I0701 23:10:29.252196  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:29.287524  269883 main.go:134] libmachine: Using SSH client type: native
	I0701 23:10:29.287708  269883 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0701 23:10:29.287733  269883 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220701225718-10066 && echo "no-preload-20220701225718-10066" | sudo tee /etc/hostname
	I0701 23:10:29.288440  269883 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37316->127.0.0.1:49437: read: connection reset by peer
	I0701 23:10:32.419154  269883 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220701225718-10066
	
	I0701 23:10:32.419236  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:32.453352  269883 main.go:134] libmachine: Using SSH client type: native
	I0701 23:10:32.453538  269883 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0701 23:10:32.453573  269883 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220701225718-10066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220701225718-10066/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220701225718-10066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 23:10:32.570226  269883 main.go:134] libmachine: SSH cmd err, output: <nil>: 
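The SSH script above is idempotent: an existing 127.0.1.1 entry in /etc/hosts is rewritten in place with sed, and only if none exists is a new line appended with `tee -a`, so repeated container restarts do not stack duplicate hostname entries.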
	I0701 23:10:32.570259  269883 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
	I0701 23:10:32.570290  269883 ubuntu.go:177] setting up certificates
	I0701 23:10:32.570314  269883 provision.go:83] configureAuth start
	I0701 23:10:32.570364  269883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220701225718-10066
	I0701 23:10:32.604673  269883 provision.go:138] copyHostCerts
	I0701 23:10:32.604741  269883 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
	I0701 23:10:32.604764  269883 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
	I0701 23:10:32.604850  269883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1679 bytes)
	I0701 23:10:32.605244  269883 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
	I0701 23:10:32.605267  269883 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
	I0701 23:10:32.605317  269883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
	I0701 23:10:32.605447  269883 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
	I0701 23:10:32.605456  269883 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
	I0701 23:10:32.605493  269883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
	I0701 23:10:32.605552  269883 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220701225718-10066 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220701225718-10066]
	I0701 23:10:32.772605  269883 provision.go:172] copyRemoteCerts
	I0701 23:10:32.772663  269883 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 23:10:32.772694  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:32.806036  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:10:32.889557  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 23:10:32.906187  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0701 23:10:32.922754  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 23:10:32.939238  269883 provision.go:86] duration metric: configureAuth took 368.908559ms
	I0701 23:10:32.939268  269883 ubuntu.go:193] setting minikube options for container-runtime
	I0701 23:10:32.939429  269883 config.go:178] Loaded profile config "no-preload-20220701225718-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:10:32.939441  269883 machine.go:91] provisioned docker machine in 3.687302971s
	I0701 23:10:32.939447  269883 start.go:306] post-start starting for "no-preload-20220701225718-10066" (driver="docker")
	I0701 23:10:32.939452  269883 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 23:10:32.939491  269883 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 23:10:32.939527  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:32.975147  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:10:33.062201  269883 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 23:10:33.065814  269883 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 23:10:33.065840  269883 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 23:10:33.065854  269883 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 23:10:33.065866  269883 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0701 23:10:33.065885  269883 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
	I0701 23:10:33.065955  269883 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
	I0701 23:10:33.066065  269883 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem -> 100662.pem in /etc/ssl/certs
	I0701 23:10:33.066201  269883 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 23:10:33.073331  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:10:33.089719  269883 start.go:309] post-start completed in 150.262783ms
	I0701 23:10:33.089782  269883 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 23:10:33.089819  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:33.125536  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:10:33.210970  269883 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 23:10:33.214873  269883 fix.go:57] fixHost completed within 4.462195685s
	I0701 23:10:33.214897  269883 start.go:81] releasing machines lock for "no-preload-20220701225718-10066", held for 4.462242204s
	I0701 23:10:33.214986  269883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220701225718-10066
	I0701 23:10:33.248938  269883 ssh_runner.go:195] Run: systemctl --version
	I0701 23:10:33.248978  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:33.249031  269883 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0701 23:10:33.249088  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:33.285027  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:10:33.286024  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:10:33.386339  269883 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 23:10:33.397864  269883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 23:10:33.407068  269883 docker.go:179] disabling docker service ...
	I0701 23:10:33.407108  269883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 23:10:33.416965  269883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 23:10:33.425446  269883 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 23:10:33.066619  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:35.565978  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:33.498217  269883 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 23:10:33.568864  269883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0701 23:10:33.577568  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 23:10:33.589825  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0701 23:10:33.598932  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0701 23:10:33.606840  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0701 23:10:33.614425  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0701 23:10:33.622221  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0701 23:10:33.629559  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
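The drop-in written here is tiny: "dmVyc2lvbiA9IDIK" decodes to the single TOML line version = 2, the stub that the imports edit one command earlier points containerd at. A quick Go check of that decoding (illustrative, not part of the test run):

    package main

    import (
        "encoding/base64"
        "fmt"
    )

    func main() {
        // the literal piped into `base64 -d` in the command above
        b, err := base64.StdEncoding.DecodeString("dmVyc2lvbiA9IDIK")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(b)) // prints: version = 2
    }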
	I0701 23:10:33.642101  269883 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 23:10:33.648858  269883 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 23:10:33.655601  269883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 23:10:33.724238  269883 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 23:10:33.794793  269883 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0701 23:10:33.794860  269883 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0701 23:10:33.798329  269883 start.go:471] Will wait 60s for crictl version
	I0701 23:10:33.798381  269883 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:10:33.824964  269883 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-07-01T23:10:33Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0701 23:10:38.067922  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:40.566156  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:44.872066  269883 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:10:44.894512  269883 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0701 23:10:44.894588  269883 ssh_runner.go:195] Run: containerd --version
	I0701 23:10:44.922163  269883 ssh_runner.go:195] Run: containerd --version
	I0701 23:10:44.951446  269883 out.go:177] * Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	I0701 23:10:43.066318  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:45.067096  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:47.566083  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:44.952886  269883 cli_runner.go:164] Run: docker network inspect no-preload-20220701225718-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 23:10:44.987019  269883 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0701 23:10:44.990236  269883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
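The hosts-file rewrite above uses a standard sudo idiom: the unprivileged shell assembles the filtered content in /tmp/h.$$ and the result is installed with `sudo cp`, since a plain `sudo ... > /etc/hosts` would open the redirect target without elevated privileges.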
	I0701 23:10:44.999796  269883 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:10:44.999840  269883 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:10:45.023088  269883 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:10:45.023106  269883 cache_images.go:84] Images are preloaded, skipping loading
	I0701 23:10:45.023142  269883 ssh_runner.go:195] Run: sudo crictl info
	I0701 23:10:45.045429  269883 cni.go:95] Creating CNI manager for ""
	I0701 23:10:45.045449  269883 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:10:45.045462  269883 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0701 23:10:45.045472  269883 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220701225718-10066 NodeName:no-preload-20220701225718-10066 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0701 23:10:45.045591  269883 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20220701225718-10066"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0701 23:10:45.045663  269883 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20220701225718-10066 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:no-preload-20220701225718-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
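In the kubelet unit above, the empty `ExecStart=` line is the usual systemd drop-in idiom: it clears any ExecStart inherited from kubelet.service so the following line fully defines the command. The drop-in lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below.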
	I0701 23:10:45.045704  269883 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0701 23:10:45.052996  269883 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 23:10:45.053052  269883 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 23:10:45.059599  269883 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (524 bytes)
	I0701 23:10:45.073222  269883 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 23:10:45.085371  269883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
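kubeadm.yaml.new is staged beside the live config rather than over it; the restart path later runs `sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new` (see below) to decide whether the existing control plane configuration can be reused as-is.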
	I0701 23:10:45.097222  269883 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0701 23:10:45.099941  269883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:10:45.108409  269883 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066 for IP: 192.168.94.2
	I0701 23:10:45.108501  269883 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
	I0701 23:10:45.108550  269883 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
	I0701 23:10:45.108623  269883 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/client.key
	I0701 23:10:45.108682  269883 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.key.ad8e880a
	I0701 23:10:45.108742  269883 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/proxy-client.key
	I0701 23:10:45.108853  269883 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem (1338 bytes)
	W0701 23:10:45.108900  269883 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066_empty.pem, impossibly tiny 0 bytes
	I0701 23:10:45.108917  269883 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 23:10:45.108949  269883 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
	I0701 23:10:45.108984  269883 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
	I0701 23:10:45.109016  269883 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1679 bytes)
	I0701 23:10:45.109075  269883 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:10:45.109765  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0701 23:10:45.125615  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 23:10:45.141690  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 23:10:45.158417  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 23:10:45.174871  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 23:10:45.191499  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0701 23:10:45.207611  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 23:10:45.223735  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 23:10:45.240344  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 23:10:45.256863  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem --> /usr/share/ca-certificates/10066.pem (1338 bytes)
	I0701 23:10:45.273492  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /usr/share/ca-certificates/100662.pem (1708 bytes)
	I0701 23:10:45.289914  269883 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 23:10:45.301974  269883 ssh_runner.go:195] Run: openssl version
	I0701 23:10:45.306905  269883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 23:10:45.314511  269883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:10:45.317377  269883 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul  1 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:10:45.317418  269883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:10:45.322125  269883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 23:10:45.328948  269883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10066.pem && ln -fs /usr/share/ca-certificates/10066.pem /etc/ssl/certs/10066.pem"
	I0701 23:10:45.335846  269883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10066.pem
	I0701 23:10:45.338716  269883 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul  1 22:28 /usr/share/ca-certificates/10066.pem
	I0701 23:10:45.338814  269883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10066.pem
	I0701 23:10:45.343304  269883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10066.pem /etc/ssl/certs/51391683.0"
	I0701 23:10:45.350375  269883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100662.pem && ln -fs /usr/share/ca-certificates/100662.pem /etc/ssl/certs/100662.pem"
	I0701 23:10:45.357390  269883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100662.pem
	I0701 23:10:45.360175  269883 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul  1 22:28 /usr/share/ca-certificates/100662.pem
	I0701 23:10:45.360212  269883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100662.pem
	I0701 23:10:45.364513  269883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100662.pem /etc/ssl/certs/3ec20f2e.0"
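The test/ln runs above maintain OpenSSL-style trust links: `openssl x509 -hash -noout` prints each certificate's subject hash, and OpenSSL resolves CAs in /etc/ssl/certs through `<hash>.0` symlinks, hence b5213941.0, 51391683.0 and 3ec20f2e.0 here.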
	I0701 23:10:45.370833  269883 kubeadm.go:395] StartCluster: {Name:no-preload-20220701225718-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:no-preload-20220701225718-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:10:45.370926  269883 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0701 23:10:45.370953  269883 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:10:45.394911  269883 cri.go:87] found id: "cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602"
	I0701 23:10:45.394940  269883 cri.go:87] found id: "b6c46b43c578cf7a32b68dbbe142a34080a7ce8d919b3b25cd350be67762ece8"
	I0701 23:10:45.394947  269883 cri.go:87] found id: "ac546802283136693ffa6f232baccec2dc6bf324f6201f319db62ceb6e58dae8"
	I0701 23:10:45.394953  269883 cri.go:87] found id: "9f4bd4048f717703723402dc2bd31164255acb9e131a3e8a35d47bf250201012"
	I0701 23:10:45.394959  269883 cri.go:87] found id: "6af50f79ce840cbfb4c4974a97d45453382d577490ba8a05d4aa2d0412fbf462"
	I0701 23:10:45.394966  269883 cri.go:87] found id: "b90cae4e4b7ea37050f35e70b6b5bc7d1c0d2b243219ea44859d2ac0853bb228"
	I0701 23:10:45.394971  269883 cri.go:87] found id: ""
	I0701 23:10:45.395004  269883 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0701 23:10:45.407224  269883 cri.go:114] JSON = null
	W0701 23:10:45.407274  269883 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
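
The kubeadm.go:402 warning is a consistency check between two views of the runtime: crictl reports six kube-system containers, while runc's listing of the k8s.io root decodes from "null" to zero containers, so there is nothing to unpause and the restart continues anyway. A hedged sketch of that cross-check, reusing only the two commands that appear verbatim above (the sudo plumbing and error handling are simplified):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// IDs of kube-system containers per crictl, one per line.
		psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		ids := strings.Fields(string(psOut))

		// Containers known to runc in the k8s.io root; a literal "null"
		// unmarshals cleanly into a nil slice, i.e. zero containers.
		listOut, err := exec.Command("sudo", "runc",
			"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
		if err != nil {
			panic(err)
		}
		var listed []struct {
			ID     string `json:"id"`
			Status string `json:"status"`
		}
		if err := json.Unmarshal(listOut, &listed); err != nil {
			panic(err)
		}
		if len(listed) != len(ids) {
			fmt.Printf("unpause skipped: list returned %d containers, but ps returned %d\n",
				len(listed), len(ids))
		}
	}
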
	I0701 23:10:45.407316  269883 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 23:10:45.413788  269883 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0701 23:10:45.413811  269883 kubeadm.go:626] restartCluster start
	I0701 23:10:45.413848  269883 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 23:10:45.419941  269883 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:45.420556  269883 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220701225718-10066" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:10:45.420886  269883 kubeconfig.go:127] "no-preload-20220701225718-10066" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig - will repair!
	I0701 23:10:45.421418  269883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
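
kubeconfig.go notices that the profile's context has vanished from the shared kubeconfig and repairs the file under a write lock before proceeding. A sketch of such a repair using client-go's clientcmd package; the repairContext helper, the placeholder path, and filling the context from supplied cluster/user names are assumptions rather than minikube's actual repair logic (in minikube the context, cluster, and user all share the profile name):

	package main

	import (
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/clientcmd/api"
	)

	// repairContext re-adds a named context pointing at the given cluster and
	// user if it has gone missing from the kubeconfig on disk.
	func repairContext(path, name, cluster, user string) error {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return err
		}
		if _, ok := cfg.Contexts[name]; !ok {
			cfg.Contexts[name] = &api.Context{Cluster: cluster, AuthInfo: user}
		}
		return clientcmd.WriteToFile(*cfg, path)
	}

	func main() {
		// Placeholder path; the profile name is taken from this log.
		profile := "no-preload-20220701225718-10066"
		if err := repairContext("/path/to/kubeconfig", profile, profile, profile); err != nil {
			panic(err)
		}
	}
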
	I0701 23:10:45.422688  269883 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 23:10:45.428759  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:45.428807  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:45.436036  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:45.636442  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:45.636498  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:45.645173  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:45.836479  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:45.836560  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:45.845558  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:46.036840  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:46.036996  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:46.045508  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:46.236821  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:46.236886  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:46.245242  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:46.436407  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:46.436476  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:46.445374  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:46.636693  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:46.636776  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:46.645429  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:46.836720  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:46.836780  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:46.845765  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:47.037048  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:47.037122  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:47.045534  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:47.236841  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:47.236919  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:47.245338  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:47.436619  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:47.436682  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:47.445831  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:47.637106  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:47.637177  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:47.646000  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:47.836229  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:47.836305  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:47.844891  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.036112  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:48.036194  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:48.044872  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.237166  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:48.237244  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:48.245689  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.437095  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:48.437163  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:10:49.567167  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:52.066482  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	W0701 23:10:48.446079  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.446102  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:48.446147  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:48.453958  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.453982  269883 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
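
"timed out waiting for the condition" is the stock timeout error from k8s.io/apimachinery's wait package: the loop above probes for a kube-apiserver process with pgrep roughly every 200ms until its deadline, then gives up and falls through to a full reconfigure. A sketch of that polling shape (the 200ms interval matches the log's cadence; the 3s timeout is illustrative, not the value minikube uses):

	package main

	import (
		"fmt"
		"os/exec"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		// Poll until kube-apiserver shows up in the process table; on deadline,
		// wait.PollImmediate returns "timed out waiting for the condition".
		err := wait.PollImmediate(200*time.Millisecond, 3*time.Second, func() (bool, error) {
			return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil, nil
		})
		if err != nil {
			fmt.Println("needs reconfigure: apiserver error:", err)
		}
	}
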
	I0701 23:10:48.453989  269883 kubeadm.go:1092] stopping kube-system containers ...
	I0701 23:10:48.454005  269883 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0701 23:10:48.454064  269883 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:10:48.477691  269883 cri.go:87] found id: "cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602"
	I0701 23:10:48.477710  269883 cri.go:87] found id: "b6c46b43c578cf7a32b68dbbe142a34080a7ce8d919b3b25cd350be67762ece8"
	I0701 23:10:48.477717  269883 cri.go:87] found id: "ac546802283136693ffa6f232baccec2dc6bf324f6201f319db62ceb6e58dae8"
	I0701 23:10:48.477722  269883 cri.go:87] found id: "9f4bd4048f717703723402dc2bd31164255acb9e131a3e8a35d47bf250201012"
	I0701 23:10:48.477728  269883 cri.go:87] found id: "6af50f79ce840cbfb4c4974a97d45453382d577490ba8a05d4aa2d0412fbf462"
	I0701 23:10:48.477734  269883 cri.go:87] found id: "b90cae4e4b7ea37050f35e70b6b5bc7d1c0d2b243219ea44859d2ac0853bb228"
	I0701 23:10:48.477740  269883 cri.go:87] found id: ""
	I0701 23:10:48.477744  269883 cri.go:232] Stopping containers: [cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602 b6c46b43c578cf7a32b68dbbe142a34080a7ce8d919b3b25cd350be67762ece8 ac546802283136693ffa6f232baccec2dc6bf324f6201f319db62ceb6e58dae8 9f4bd4048f717703723402dc2bd31164255acb9e131a3e8a35d47bf250201012 6af50f79ce840cbfb4c4974a97d45453382d577490ba8a05d4aa2d0412fbf462 b90cae4e4b7ea37050f35e70b6b5bc7d1c0d2b243219ea44859d2ac0853bb228]
	I0701 23:10:48.477788  269883 ssh_runner.go:195] Run: which crictl
	I0701 23:10:48.480366  269883 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602 b6c46b43c578cf7a32b68dbbe142a34080a7ce8d919b3b25cd350be67762ece8 ac546802283136693ffa6f232baccec2dc6bf324f6201f319db62ceb6e58dae8 9f4bd4048f717703723402dc2bd31164255acb9e131a3e8a35d47bf250201012 6af50f79ce840cbfb4c4974a97d45453382d577490ba8a05d4aa2d0412fbf462 b90cae4e4b7ea37050f35e70b6b5bc7d1c0d2b243219ea44859d2ac0853bb228
	I0701 23:10:48.505890  269883 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0701 23:10:48.515195  269883 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:10:48.521761  269883 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul  1 22:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul  1 22:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jul  1 22:57 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul  1 22:57 /etc/kubernetes/scheduler.conf
	
	I0701 23:10:48.521807  269883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0701 23:10:48.527978  269883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0701 23:10:48.534409  269883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0701 23:10:48.540704  269883 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.540749  269883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 23:10:48.547734  269883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0701 23:10:48.555417  269883 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.555456  269883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
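
Each static kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint; controller-manager.conf and scheduler.conf do not mention https://control-plane.minikube.internal:8443, so both are deleted and left for `kubeadm init phase kubeconfig` to regenerate. A sketch of that filter, assuming a plain substring match is sufficient:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	func main() {
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(f)
			if err != nil {
				continue // missing file: nothing to prune
			}
			if !bytes.Contains(data, []byte(endpoint)) {
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
				_ = os.Remove(f)
			}
		}
	}
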
	I0701 23:10:48.561653  269883 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:10:48.568679  269883 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0701 23:10:48.568731  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:10:48.610822  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:10:49.481354  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:10:49.661389  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:10:49.719236  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
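
Because configuration files already existed, the restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed /var/tmp/minikube/kubeadm.yaml rather than performing a full `kubeadm init`. The same sequence, sketched as a loop; prepending the versioned binaries directory to PATH mirrors the env wrapper in the log (run as root or under sudo):

	package main

	import (
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		// Child processes (kubelet, etc.) should resolve the versioned binaries first.
		env := os.Environ()
		for i, v := range env {
			if strings.HasPrefix(v, "PATH=") {
				env[i] = "PATH=/var/lib/minikube/binaries/v1.24.2:" + strings.TrimPrefix(v, "PATH=")
			}
		}
		for _, p := range phases {
			args := append([]string{"init", "phase"}, p...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("/var/lib/minikube/binaries/v1.24.2/kubeadm", args...)
			cmd.Env = env
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				os.Exit(1)
			}
		}
	}
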
	I0701 23:10:49.825158  269883 api_server.go:51] waiting for apiserver process to appear ...
	I0701 23:10:49.825270  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:10:50.335318  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:10:50.834701  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:10:50.846390  269883 api_server.go:71] duration metric: took 1.021235424s to wait for apiserver process to appear ...
	I0701 23:10:50.846420  269883 api_server.go:87] waiting for apiserver healthz status ...
	I0701 23:10:50.846431  269883 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0701 23:10:50.846825  269883 api_server.go:256] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0701 23:10:51.347542  269883 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0701 23:10:54.133900  269883 api_server.go:266] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0701 23:10:54.133986  269883 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0701 23:10:54.347164  269883 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0701 23:10:54.351414  269883 api_server.go:266] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:10:54.351438  269883 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:10:54.847723  269883 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0701 23:10:54.852128  269883 api_server.go:266] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:10:54.852158  269883 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:10:55.347708  269883 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0701 23:10:55.352265  269883 api_server.go:266] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0701 23:10:55.358013  269883 api_server.go:140] control plane version: v1.24.2
	I0701 23:10:55.358035  269883 api_server.go:130] duration metric: took 4.511609554s to wait for apiserver health ...
	I0701 23:10:55.358045  269883 cni.go:95] Creating CNI manager for ""
	I0701 23:10:55.358050  269883 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:10:55.360161  269883 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:10:54.067513  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:56.566103  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:55.361441  269883 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:10:55.364979  269883 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:10:55.364998  269883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:10:55.377645  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
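
cni.go selects kindnet for the docker driver + containerd combination, confirms the portmap plugin exists under /opt/cni/bin, copies a 2429-byte manifest to /var/tmp/minikube/cni.yaml, and applies it with the version-matched kubectl. A sketch of the check-and-apply steps using the exact paths from the log (the scp step is omitted):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// The portmap plugin must be present before a CNI manifest is worth applying.
		if _, err := os.Stat("/opt/cni/bin/portmap"); err != nil {
			fmt.Fprintln(os.Stderr, "CNI plugins missing:", err)
			os.Exit(1)
		}
		// Apply the manifest with the kubelet-versioned kubectl, as in the log.
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.24.2/kubectl",
			"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}
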
	I0701 23:10:56.166732  269883 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 23:10:56.173254  269883 system_pods.go:59] 9 kube-system pods found
	I0701 23:10:56.173284  269883 system_pods.go:61] "coredns-6d4b75cb6d-mbfz4" [2ba91f90-b153-4f32-8309-108f0c8156db] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:10:56.173292  269883 system_pods.go:61] "etcd-no-preload-20220701225718-10066" [eb03d3be-2878-4ae8-9dfc-5a4fccffca06] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0701 23:10:56.173300  269883 system_pods.go:61] "kindnet-b5wkl" [bc770683-78b7-449f-a0af-5a2cc006275c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0701 23:10:56.173308  269883 system_pods.go:61] "kube-apiserver-no-preload-20220701225718-10066" [83390193-15db-49db-9ca3-065ebded60a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0701 23:10:56.173317  269883 system_pods.go:61] "kube-controller-manager-no-preload-20220701225718-10066" [086fda3b-1ef9-4e42-944f-4c20bbde78b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0701 23:10:56.173323  269883 system_pods.go:61] "kube-proxy-5ck82" [1b54a384-18b1-4c4f-84ab-fe3f8d2c3100] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0701 23:10:56.173328  269883 system_pods.go:61] "kube-scheduler-no-preload-20220701225718-10066" [87e67937-d3d1-47f6-9ee3-cb47460c5a96] Running
	I0701 23:10:56.173334  269883 system_pods.go:61] "metrics-server-5c6f97fb75-hqds8" [8c904dd9-6f61-494f-9ce0-b1e79f7a8f32] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:10:56.173344  269883 system_pods.go:61] "storage-provisioner" [fb659ca7-b379-4467-bf65-4ae7b8b0b2a9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:10:56.173348  269883 system_pods.go:74] duration metric: took 6.593831ms to wait for pod list to return data ...
	I0701 23:10:56.173354  269883 node_conditions.go:102] verifying NodePressure condition ...
	I0701 23:10:56.175724  269883 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0701 23:10:56.175753  269883 node_conditions.go:123] node cpu capacity is 8
	I0701 23:10:56.175768  269883 node_conditions.go:105] duration metric: took 2.40915ms to run NodePressure ...
	I0701 23:10:56.175789  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:10:56.319373  269883 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0701 23:10:56.323915  269883 kubeadm.go:777] kubelet initialised
	I0701 23:10:56.323936  269883 kubeadm.go:778] duration metric: took 4.537399ms waiting for restarted kubelet to initialise ...
	I0701 23:10:56.323943  269883 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:10:56.329062  269883 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace to be "Ready" ...
	I0701 23:10:58.335246  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
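
From here both interleaved profiles (pids 269883 and 245311) are in the same holding pattern: CoreDNS stays Pending because the scheduler will not place it on a node carrying the node.kubernetes.io/not-ready taint, and that taint only clears once the CNI initialises, so pod_ready polls until its 4m budget expires. A sketch of the readiness test being repeated, via client-go; the kubeconfig path and pod name come from this log, and panicking on error is for brevity:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
			"coredns-6d4b75cb6d-mbfz4", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("pod %q Ready: %s (%s)\n", pod.Name, c.Status, c.Message)
				return
			}
		}
		// A never-scheduled Pending pod has no PodReady condition at all, only
		// PodScheduled=False with the taint message seen in the log.
		fmt.Println("pod has no Ready condition yet; still Pending")
	}
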
	I0701 23:10:59.066949  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:01.565821  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:00.835256  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:03.334510  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:03.566162  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:05.567345  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:05.835173  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:08.334350  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:08.067094  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:10.565414  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:12.566111  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:10.334372  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:12.335344  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:14.566344  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:17.065785  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:14.834372  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:16.834442  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:19.066060  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:21.066818  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:18.835141  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:21.335290  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:23.066855  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:25.566364  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:27.566761  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:23.835104  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:25.835407  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:28.334721  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:30.066364  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:32.066949  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:30.335044  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:32.834615  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:34.566677  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:37.066514  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:34.834740  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:36.835246  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:39.566283  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:42.066440  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:39.334127  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:41.334784  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:43.335046  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:44.066677  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:46.566155  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:45.335120  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:47.835290  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:49.067006  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:51.566261  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:50.334959  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:52.834501  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:53.566563  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:56.066781  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:55.335057  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:57.335198  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:58.566872  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:01.066349  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:59.835098  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:02.334947  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:03.066917  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:05.567287  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:04.335004  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:06.834679  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:08.067032  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:10.565807  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:12.568198  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:08.834990  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:11.334922  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:15.066443  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:17.066775  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:13.834714  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:16.335295  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:19.066984  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:21.566968  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:18.834428  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:20.834730  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:22.834980  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:24.065886  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:26.066500  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:24.835325  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:26.835428  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:28.566632  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:31.066025  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:29.335427  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:31.834460  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:33.066834  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:35.567919  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:33.835479  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:36.335438  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:38.066289  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:40.066776  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:42.067079  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:38.834828  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:41.335293  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:44.566023  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:47.066323  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:43.834535  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:46.334601  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:48.335004  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:49.567644  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:52.066841  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:50.335143  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:52.834795  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:54.566014  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:57.066882  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:55.334932  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:57.335218  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:59.566811  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:13:02.066846  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:59.834795  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:02.335059  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:03.569236  245311 pod_ready.go:81] duration metric: took 4m0.013189395s waiting for pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace to be "Ready" ...
	E0701 23:13:03.569260  245311 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0701 23:13:03.569268  245311 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-zqzmg" in "kube-system" namespace to be "Ready" ...
	I0701 23:13:03.570751  245311 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-zqzmg" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-zqzmg" not found
	I0701 23:13:03.570776  245311 pod_ready.go:81] duration metric: took 1.502466ms waiting for pod "coredns-5644d7b6d9-zqzmg" in "kube-system" namespace to be "Ready" ...
	E0701 23:13:03.570784  245311 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-zqzmg" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-zqzmg" not found
	I0701 23:13:03.570798  245311 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g4dnv" in "kube-system" namespace to be "Ready" ...
	I0701 23:13:03.574297  245311 pod_ready.go:92] pod "kube-proxy-g4dnv" in "kube-system" namespace has status "Ready":"True"
	I0701 23:13:03.574311  245311 pod_ready.go:81] duration metric: took 3.503795ms waiting for pod "kube-proxy-g4dnv" in "kube-system" namespace to be "Ready" ...
	I0701 23:13:03.574316  245311 pod_ready.go:38] duration metric: took 4m0.022968207s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:13:03.574339  245311 api_server.go:51] waiting for apiserver process to appear ...
	I0701 23:13:03.574362  245311 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 23:13:03.574406  245311 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 23:13:03.598098  245311 cri.go:87] found id: "4b47f9926929819e69453420728cf3addca1506481e47f4c869fe0b9ecb1b05a"
	I0701 23:13:03.598124  245311 cri.go:87] found id: ""
	I0701 23:13:03.598132  245311 logs.go:274] 1 containers: [4b47f9926929819e69453420728cf3addca1506481e47f4c869fe0b9ecb1b05a]
	I0701 23:13:03.598183  245311 ssh_runner.go:195] Run: which crictl
	I0701 23:13:03.601336  245311 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 23:13:03.601389  245311 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 23:13:03.625070  245311 cri.go:87] found id: "a55adf585063763c3db98dacb0da46e76d68ce38f0c45609857c6b9ffb1a06d0"
	I0701 23:13:03.625097  245311 cri.go:87] found id: ""
	I0701 23:13:03.625102  245311 logs.go:274] 1 containers: [a55adf585063763c3db98dacb0da46e76d68ce38f0c45609857c6b9ffb1a06d0]
	I0701 23:13:03.625138  245311 ssh_runner.go:195] Run: which crictl
	I0701 23:13:03.627912  245311 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 23:13:03.627965  245311 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 23:13:03.651643  245311 cri.go:87] found id: "727be1a12a814c58548c78d384ece92824c96d9280a507808fd45ea07b1b46aa"
	I0701 23:13:03.651675  245311 cri.go:87] found id: ""
	I0701 23:13:03.651684  245311 logs.go:274] 1 containers: [727be1a12a814c58548c78d384ece92824c96d9280a507808fd45ea07b1b46aa]
	I0701 23:13:03.651732  245311 ssh_runner.go:195] Run: which crictl
	I0701 23:13:03.654620  245311 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 23:13:03.654673  245311 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 23:13:03.676764  245311 cri.go:87] found id: "1f2d3fd9a60d0e2579160255fd9b2b375185ae8aea24eed201bcbb08463b65bb"
	I0701 23:13:03.676791  245311 cri.go:87] found id: ""
	I0701 23:13:03.676798  245311 logs.go:274] 1 containers: [1f2d3fd9a60d0e2579160255fd9b2b375185ae8aea24eed201bcbb08463b65bb]
	I0701 23:13:03.676845  245311 ssh_runner.go:195] Run: which crictl
	I0701 23:13:03.679515  245311 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 23:13:03.679568  245311 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 23:13:03.702910  245311 cri.go:87] found id: "e4a74bac196c5e31b3f8c9c449789f1aa8ffc2993ca019c6d2b5a82e315c5223"
	I0701 23:13:03.702934  245311 cri.go:87] found id: ""
	I0701 23:13:03.702942  245311 logs.go:274] 1 containers: [e4a74bac196c5e31b3f8c9c449789f1aa8ffc2993ca019c6d2b5a82e315c5223]
	I0701 23:13:03.702986  245311 ssh_runner.go:195] Run: which crictl
	I0701 23:13:03.705769  245311 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 23:13:03.705823  245311 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 23:13:03.728693  245311 cri.go:87] found id: "55d927862045cb2a519e3a468367905baae75074cdda05ae64fb65694f98cc4b"
	I0701 23:13:03.728719  245311 cri.go:87] found id: "3cfac1159d4773d5f176f91fa74733ee9b0045ab10842bd2a217eddf9421471a"
	I0701 23:13:03.728729  245311 cri.go:87] found id: ""
	I0701 23:13:03.728736  245311 logs.go:274] 2 containers: [55d927862045cb2a519e3a468367905baae75074cdda05ae64fb65694f98cc4b 3cfac1159d4773d5f176f91fa74733ee9b0045ab10842bd2a217eddf9421471a]
	I0701 23:13:03.728778  245311 ssh_runner.go:195] Run: which crictl
	I0701 23:13:03.731678  245311 ssh_runner.go:195] Run: which crictl
	I0701 23:13:03.734368  245311 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 23:13:03.734438  245311 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 23:13:03.756602  245311 cri.go:87] found id: "08c1ea895fea68162f932135dc53d50d05dd9e25682cae5da142785d5e5da6c0"
	I0701 23:13:03.756623  245311 cri.go:87] found id: "ad010e052537e0cab1713f6b0445a1b3134cb1188c3ff2b5eb9f68e9818c8143"
	I0701 23:13:03.756634  245311 cri.go:87] found id: ""
	I0701 23:13:03.756641  245311 logs.go:274] 2 containers: [08c1ea895fea68162f932135dc53d50d05dd9e25682cae5da142785d5e5da6c0 ad010e052537e0cab1713f6b0445a1b3134cb1188c3ff2b5eb9f68e9818c8143]
	I0701 23:13:03.756675  245311 ssh_runner.go:195] Run: which crictl
	I0701 23:13:03.759300  245311 ssh_runner.go:195] Run: which crictl
	I0701 23:13:03.761790  245311 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 23:13:03.761844  245311 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 23:13:03.783728  245311 cri.go:87] found id: "d9f13d77694b02b2ceed82bd2f28b0172dfc05b9f0d7b457accd742a791866bc"
	I0701 23:13:03.783756  245311 cri.go:87] found id: ""
	I0701 23:13:03.783763  245311 logs.go:274] 1 containers: [d9f13d77694b02b2ceed82bd2f28b0172dfc05b9f0d7b457accd742a791866bc]
	I0701 23:13:03.783792  245311 ssh_runner.go:195] Run: which crictl
	I0701 23:13:03.786429  245311 logs.go:123] Gathering logs for kube-apiserver [4b47f9926929819e69453420728cf3addca1506481e47f4c869fe0b9ecb1b05a] ...
	I0701 23:13:03.786449  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b47f9926929819e69453420728cf3addca1506481e47f4c869fe0b9ecb1b05a"
	I0701 23:13:03.829813  245311 logs.go:123] Gathering logs for coredns [727be1a12a814c58548c78d384ece92824c96d9280a507808fd45ea07b1b46aa] ...
	I0701 23:13:03.829834  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 727be1a12a814c58548c78d384ece92824c96d9280a507808fd45ea07b1b46aa"
	I0701 23:13:03.868691  245311 logs.go:123] Gathering logs for kubernetes-dashboard [55d927862045cb2a519e3a468367905baae75074cdda05ae64fb65694f98cc4b] ...
	I0701 23:13:03.868726  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55d927862045cb2a519e3a468367905baae75074cdda05ae64fb65694f98cc4b"
	I0701 23:13:03.892505  245311 logs.go:123] Gathering logs for container status ...
	I0701 23:13:03.892532  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 23:13:03.918299  245311 logs.go:123] Gathering logs for kube-proxy [e4a74bac196c5e31b3f8c9c449789f1aa8ffc2993ca019c6d2b5a82e315c5223] ...
	I0701 23:13:03.918325  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4a74bac196c5e31b3f8c9c449789f1aa8ffc2993ca019c6d2b5a82e315c5223"
	I0701 23:13:03.940698  245311 logs.go:123] Gathering logs for storage-provisioner [08c1ea895fea68162f932135dc53d50d05dd9e25682cae5da142785d5e5da6c0] ...
	I0701 23:13:03.940733  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08c1ea895fea68162f932135dc53d50d05dd9e25682cae5da142785d5e5da6c0"
	I0701 23:13:03.964217  245311 logs.go:123] Gathering logs for etcd [a55adf585063763c3db98dacb0da46e76d68ce38f0c45609857c6b9ffb1a06d0] ...
	I0701 23:13:03.964246  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a55adf585063763c3db98dacb0da46e76d68ce38f0c45609857c6b9ffb1a06d0"
	I0701 23:13:03.998956  245311 logs.go:123] Gathering logs for kube-scheduler [1f2d3fd9a60d0e2579160255fd9b2b375185ae8aea24eed201bcbb08463b65bb] ...
	I0701 23:13:03.998982  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f2d3fd9a60d0e2579160255fd9b2b375185ae8aea24eed201bcbb08463b65bb"
	I0701 23:13:04.028494  245311 logs.go:123] Gathering logs for kube-controller-manager [d9f13d77694b02b2ceed82bd2f28b0172dfc05b9f0d7b457accd742a791866bc] ...
	I0701 23:13:04.028523  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9f13d77694b02b2ceed82bd2f28b0172dfc05b9f0d7b457accd742a791866bc"
	I0701 23:13:04.080296  245311 logs.go:123] Gathering logs for containerd ...
	I0701 23:13:04.080327  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 23:13:04.142861  245311 logs.go:123] Gathering logs for storage-provisioner [ad010e052537e0cab1713f6b0445a1b3134cb1188c3ff2b5eb9f68e9818c8143] ...
	I0701 23:13:04.142890  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad010e052537e0cab1713f6b0445a1b3134cb1188c3ff2b5eb9f68e9818c8143"
	I0701 23:13:04.166954  245311 logs.go:123] Gathering logs for kubelet ...
	I0701 23:13:04.166980  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 23:13:04.192126  245311 logs.go:138] Found kubelet problem: Jul 01 23:03:51 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:03:51.863315     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0701 23:13:04.192306  245311 logs.go:138] Found kubelet problem: Jul 01 23:03:52 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:03:52.464784     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.193491  245311 logs.go:138] Found kubelet problem: Jul 01 23:04:04 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:04:04.311457     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0701 23:13:04.193649  245311 logs.go:138] Found kubelet problem: Jul 01 23:04:05 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:04:05.489840     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.194081  245311 logs.go:138] Found kubelet problem: Jul 01 23:04:16 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:04:16.218877     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.195291  245311 logs.go:138] Found kubelet problem: Jul 01 23:04:28 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:04:28.231400     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0701 23:13:04.195447  245311 logs.go:138] Found kubelet problem: Jul 01 23:04:39 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:04:39.219001     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.195602  245311 logs.go:138] Found kubelet problem: Jul 01 23:04:50 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:04:50.218928     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.195759  245311 logs.go:138] Found kubelet problem: Jul 01 23:04:51 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:04:51.585326     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.195915  245311 logs.go:138] Found kubelet problem: Jul 01 23:05:02 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:05:02.218268     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.196081  245311 logs.go:138] Found kubelet problem: Jul 01 23:05:04 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:05:04.219008     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.196984  245311 logs.go:138] Found kubelet problem: Jul 01 23:05:18 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:05:18.231346     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0701 23:13:04.197147  245311 logs.go:138] Found kubelet problem: Jul 01 23:05:30 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:05:30.222082     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.197299  245311 logs.go:138] Found kubelet problem: Jul 01 23:05:43 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:05:43.218929     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.197454  245311 logs.go:138] Found kubelet problem: Jul 01 23:05:45 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:05:45.696518     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.197611  245311 logs.go:138] Found kubelet problem: Jul 01 23:05:56 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:05:56.219007     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.197762  245311 logs.go:138] Found kubelet problem: Jul 01 23:05:57 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:05:57.218434     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.197912  245311 logs.go:138] Found kubelet problem: Jul 01 23:06:09 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:06:09.219015     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.198069  245311 logs.go:138] Found kubelet problem: Jul 01 23:06:11 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:06:11.218959     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.198215  245311 logs.go:138] Found kubelet problem: Jul 01 23:06:15 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:06:15.759238     770 pod_workers.go:191] Error syncing pod 0f6b7680-dbef-4a34-8e81-5e9a14db6993 ("kindnet-gmgzk_kube-system(0f6b7680-dbef-4a34-8e81-5e9a14db6993)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 10s restarting failed container=kindnet-cni pod=kindnet-gmgzk_kube-system(0f6b7680-dbef-4a34-8e81-5e9a14db6993)"
	W0701 23:13:04.198377  245311 logs.go:138] Found kubelet problem: Jul 01 23:06:23 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:06:23.218264     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.198565  245311 logs.go:138] Found kubelet problem: Jul 01 23:06:25 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:06:25.219000     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.198807  245311 logs.go:138] Found kubelet problem: Jul 01 23:06:38 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:06:38.219041     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.199714  245311 logs.go:138] Found kubelet problem: Jul 01 23:06:51 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:06:51.258634     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0701 23:13:04.199868  245311 logs.go:138] Found kubelet problem: Jul 01 23:07:06 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:07:06.219190     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.200019  245311 logs.go:138] Found kubelet problem: Jul 01 23:07:06 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:07:06.873041     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.200167  245311 logs.go:138] Found kubelet problem: Jul 01 23:07:19 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:07:19.218858     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.200323  245311 logs.go:138] Found kubelet problem: Jul 01 23:07:19 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:07:19.219602     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.200470  245311 logs.go:138] Found kubelet problem: Jul 01 23:07:30 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:07:30.218243     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.200621  245311 logs.go:138] Found kubelet problem: Jul 01 23:07:31 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:07:31.219172     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.200776  245311 logs.go:138] Found kubelet problem: Jul 01 23:07:42 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:07:42.218206     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.200928  245311 logs.go:138] Found kubelet problem: Jul 01 23:07:43 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:07:43.218901     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.201122  245311 logs.go:138] Found kubelet problem: Jul 01 23:07:55 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:07:55.218250     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.201303  245311 logs.go:138] Found kubelet problem: Jul 01 23:07:56 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:07:56.218823     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.201462  245311 logs.go:138] Found kubelet problem: Jul 01 23:08:09 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:08:09.218748     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.201615  245311 logs.go:138] Found kubelet problem: Jul 01 23:08:10 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:08:10.218798     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.201771  245311 logs.go:138] Found kubelet problem: Jul 01 23:08:21 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:08:21.219015     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.201955  245311 logs.go:138] Found kubelet problem: Jul 01 23:08:22 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:08:22.218294     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.202114  245311 logs.go:138] Found kubelet problem: Jul 01 23:08:34 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:08:34.218766     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.239697  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:03 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:03.247569    4764 pod_workers.go:191] Error syncing pod 4a3b2350-c424-4b64-bc26-464c6485c295 ("coredns-5644d7b6d9-zqzmg_kube-system(4a3b2350-c424-4b64-bc26-464c6485c295)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "cannot find volume \"config-volume\" to mount into container \"coredns\""
	W0701 23:13:04.242261  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:14 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:14.473432    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0701 23:13:04.242417  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:15 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:15.457415    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.242602  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:17 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:17.463867    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.242789  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:18 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:18.466920    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.243021  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:19 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:19.468338    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.243989  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:29 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:29.269820    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0701 23:13:04.244166  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:34 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:34.496554    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.244331  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:38 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:38.196082    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.244502  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:41 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:41.256927    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.244679  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:49 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:49.256485    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.245579  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:55 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:55.337448    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0701 23:13:04.245750  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:01 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:01.550176    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.245912  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:06 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:06.256928    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.246059  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:06 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:06.562670    4764 pod_workers.go:191] Error syncing pod a3c51354-ae42-4173-a10c-a7007d23cf91 ("storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"
	W0701 23:13:04.246227  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:08 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:08.195936    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.246388  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:12 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:12.577430    4764 pod_workers.go:191] Error syncing pod bd5c7273-9787-44fc-8bfe-b5ec02aa2335 ("kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"
	W0701 23:13:04.246578  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:18 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:18.253487    4764 pod_workers.go:191] Error syncing pod bd5c7273-9787-44fc-8bfe-b5ec02aa2335 ("kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"
	W0701 23:13:04.246737  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:20 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:20.257021    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.246910  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:22 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:22.256362    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.247063  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:33 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:33.256950    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.247225  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:35 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:35.256288    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.248138  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:46 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:46.357751    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0701 23:13:04.248313  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:48 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:48.645508    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.248486  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:50 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:50.653652    4764 pod_workers.go:191] Error syncing pod a3c51354-ae42-4173-a10c-a7007d23cf91 ("storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"
	W0701 23:13:04.248658  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:58 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:58.196029    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.248821  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:59 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:59.256718    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.248991  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:00 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:00.675487    4764 pod_workers.go:191] Error syncing pod bd5c7273-9787-44fc-8bfe-b5ec02aa2335 ("kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"
	W0701 23:13:04.249141  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:06 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:06.256399    4764 pod_workers.go:191] Error syncing pod a3c51354-ae42-4173-a10c-a7007d23cf91 ("storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"
	W0701 23:13:04.249303  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:08 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:08.253683    4764 pod_workers.go:191] Error syncing pod bd5c7273-9787-44fc-8bfe-b5ec02aa2335 ("kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"
	W0701 23:13:04.249467  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:09 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:09.256386    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.249623  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:10 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:10.257019    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.249795  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:20 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:20.256357    4764 pod_workers.go:191] Error syncing pod bd5c7273-9787-44fc-8bfe-b5ec02aa2335 ("kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"
	W0701 23:13:04.249957  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:20 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:20.256845    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.250117  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:21 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:21.256771    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.250280  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:31 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:31.256281    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.250433  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:35 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:35.257030    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.250621  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:44 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:44.256624    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.250774  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:47 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:47.763173    4764 pod_workers.go:191] Error syncing pod a3c51354-ae42-4173-a10c-a7007d23cf91 ("storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"
	W0701 23:13:04.250939  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:49 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:49.256705    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.251103  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:58 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:58.257081    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.251258  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:00 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:00.256251    4764 pod_workers.go:191] Error syncing pod a3c51354-ae42-4173-a10c-a7007d23cf91 ("storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"
	W0701 23:13:04.251418  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:02 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:02.256992    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.251580  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:05 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:05.796542    4764 pod_workers.go:191] Error syncing pod bd5c7273-9787-44fc-8bfe-b5ec02aa2335 ("kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"
	W0701 23:13:04.251751  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:08 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:08.253712    4764 pod_workers.go:191] Error syncing pod bd5c7273-9787-44fc-8bfe-b5ec02aa2335 ("kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"
	W0701 23:13:04.251928  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:11 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:11.810210    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.252099  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:12 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:12.256324    4764 pod_workers.go:191] Error syncing pod a3c51354-ae42-4173-a10c-a7007d23cf91 ("storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"
	W0701 23:13:04.252267  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:18 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:18.195937    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.253211  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:18 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:18.614311    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0701 23:13:04.253381  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:23 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:23.256290    4764 pod_workers.go:191] Error syncing pod bd5c7273-9787-44fc-8bfe-b5ec02aa2335 ("kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"
	W0701 23:13:04.253531  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:26 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:26.256319    4764 pod_workers.go:191] Error syncing pod a3c51354-ae42-4173-a10c-a7007d23cf91 ("storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"
	W0701 23:13:04.253701  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:32 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:32.256483    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.253855  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:32 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:32.257144    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.254028  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:37 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:37.256349    4764 pod_workers.go:191] Error syncing pod bd5c7273-9787-44fc-8bfe-b5ec02aa2335 ("kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"
	W0701 23:13:04.254182  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:45 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:45.256839    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.254353  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:47 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:47.256227    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.254513  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:59 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:59.256850    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.254694  245311 logs.go:138] Found kubelet problem: Jul 01 23:13:00 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:13:00.256349    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	I0701 23:13:04.254706  245311 logs.go:123] Gathering logs for dmesg ...
	I0701 23:13:04.254717  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 23:13:04.271353  245311 logs.go:123] Gathering logs for describe nodes ...
	I0701 23:13:04.271380  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 23:13:04.359204  245311 logs.go:123] Gathering logs for kubernetes-dashboard [3cfac1159d4773d5f176f91fa74733ee9b0045ab10842bd2a217eddf9421471a] ...
	I0701 23:13:04.359237  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cfac1159d4773d5f176f91fa74733ee9b0045ab10842bd2a217eddf9421471a"
	I0701 23:13:04.383261  245311 out.go:309] Setting ErrFile to fd 2...
	I0701 23:13:04.383286  245311 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0701 23:13:04.383393  245311 out.go:239] X Problems detected in kubelet:
	W0701 23:13:04.383406  245311 out.go:239]   Jul 01 23:12:37 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:37.256349    4764 pod_workers.go:191] Error syncing pod bd5c7273-9787-44fc-8bfe-b5ec02aa2335 ("kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"
	W0701 23:13:04.383429  245311 out.go:239]   Jul 01 23:12:45 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:45.256839    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.383440  245311 out.go:239]   Jul 01 23:12:47 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:47.256227    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.383448  245311 out.go:239]   Jul 01 23:12:59 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:59.256850    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.383460  245311 out.go:239]   Jul 01 23:13:00 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:13:00.256349    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	I0701 23:13:04.383466  245311 out.go:309] Setting ErrFile to fd 2...
	I0701 23:13:04.383475  245311 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:13:04.834809  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:06.835212  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	e85724e2f1379       6fb66cd78abfe       3 minutes ago       Exited              kindnet-cni               3                   4386b73a1f791
	b63fba32c68cc       a634548d10b03       12 minutes ago      Running             kube-proxy                0                   8996b4de19f2f
	50e0bf3dbb8c1       aebe758cef4cd       12 minutes ago      Running             etcd                      0                   0e8080292fce1
	f41d2b7f1a0c9       34cdf99b1bb3b       12 minutes ago      Running             kube-controller-manager   0                   42cd5575a78ac
	a349e45d95bb6       d3377ffb7177c       12 minutes ago      Running             kube-apiserver            0                   b016974955465
	042166814f4c8       5d725196c1f47       12 minutes ago      Running             kube-scheduler            0                   a077b2a3977f0
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-07-01 23:00:41 UTC, end at Fri 2022-07-01 23:13:12 UTC. --
	Jul 01 23:06:29 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:06:29.878015509Z" level=warning msg="cleaning up after shim disconnected" id=cb48669a69f64e2f17c64425e02c1ae6dfa44b7dc264a7992b2952f09646f452 namespace=k8s.io
	Jul 01 23:06:29 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:06:29.878034759Z" level=info msg="cleaning up dead shim"
	Jul 01 23:06:29 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:06:29.887840768Z" level=warning msg="cleanup warnings time=\"2022-07-01T23:06:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2461 runtime=io.containerd.runc.v2\n"
	Jul 01 23:06:30 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:06:30.628255801Z" level=info msg="RemoveContainer for \"c11940cc2c8ec5e2f7fd5f8efbe605f5483acdebf6732ce60b631524b5e42b6a\""
	Jul 01 23:06:30 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:06:30.632295226Z" level=info msg="RemoveContainer for \"c11940cc2c8ec5e2f7fd5f8efbe605f5483acdebf6732ce60b631524b5e42b6a\" returns successfully"
	Jul 01 23:06:42 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:06:42.955995884Z" level=info msg="CreateContainer within sandbox \"4386b73a1f791d0ffc3776da483f1f4457dfbaa0c69557993b95cb6067747d32\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jul 01 23:06:42 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:06:42.968659548Z" level=info msg="CreateContainer within sandbox \"4386b73a1f791d0ffc3776da483f1f4457dfbaa0c69557993b95cb6067747d32\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"054a85a49188bd722f8f2b3188f6bbcb67031b2c9d8d640d82248945b99fb88b\""
	Jul 01 23:06:42 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:06:42.969110368Z" level=info msg="StartContainer for \"054a85a49188bd722f8f2b3188f6bbcb67031b2c9d8d640d82248945b99fb88b\""
	Jul 01 23:06:43 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:06:43.035389439Z" level=info msg="StartContainer for \"054a85a49188bd722f8f2b3188f6bbcb67031b2c9d8d640d82248945b99fb88b\" returns successfully"
	Jul 01 23:09:23 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:09:23.467513671Z" level=info msg="shim disconnected" id=054a85a49188bd722f8f2b3188f6bbcb67031b2c9d8d640d82248945b99fb88b
	Jul 01 23:09:23 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:09:23.467581950Z" level=warning msg="cleaning up after shim disconnected" id=054a85a49188bd722f8f2b3188f6bbcb67031b2c9d8d640d82248945b99fb88b namespace=k8s.io
	Jul 01 23:09:23 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:09:23.467597827Z" level=info msg="cleaning up dead shim"
	Jul 01 23:09:23 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:09:23.477093004Z" level=warning msg="cleanup warnings time=\"2022-07-01T23:09:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2566 runtime=io.containerd.runc.v2\n"
	Jul 01 23:09:23 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:09:23.948165580Z" level=info msg="RemoveContainer for \"cb48669a69f64e2f17c64425e02c1ae6dfa44b7dc264a7992b2952f09646f452\""
	Jul 01 23:09:23 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:09:23.952191227Z" level=info msg="RemoveContainer for \"cb48669a69f64e2f17c64425e02c1ae6dfa44b7dc264a7992b2952f09646f452\" returns successfully"
	Jul 01 23:09:46 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:09:46.956607181Z" level=info msg="CreateContainer within sandbox \"4386b73a1f791d0ffc3776da483f1f4457dfbaa0c69557993b95cb6067747d32\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jul 01 23:09:46 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:09:46.969040688Z" level=info msg="CreateContainer within sandbox \"4386b73a1f791d0ffc3776da483f1f4457dfbaa0c69557993b95cb6067747d32\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e\""
	Jul 01 23:09:46 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:09:46.969484685Z" level=info msg="StartContainer for \"e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e\""
	Jul 01 23:09:47 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:09:47.221441787Z" level=info msg="StartContainer for \"e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e\" returns successfully"
	Jul 01 23:12:27 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:12:27.639406039Z" level=info msg="shim disconnected" id=e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e
	Jul 01 23:12:27 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:12:27.639466172Z" level=warning msg="cleaning up after shim disconnected" id=e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e namespace=k8s.io
	Jul 01 23:12:27 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:12:27.639479354Z" level=info msg="cleaning up dead shim"
	Jul 01 23:12:27 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:12:27.648670720Z" level=warning msg="cleanup warnings time=\"2022-07-01T23:12:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2674 runtime=io.containerd.runc.v2\n"
	Jul 01 23:12:28 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:12:28.285076104Z" level=info msg="RemoveContainer for \"054a85a49188bd722f8f2b3188f6bbcb67031b2c9d8d640d82248945b99fb88b\""
	Jul 01 23:12:28 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:12:28.291561861Z" level=info msg="RemoveContainer for \"054a85a49188bd722f8f2b3188f6bbcb67031b2c9d8d640d82248945b99fb88b\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220701230032-10066
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220701230032-10066
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04
	                    minikube.k8s.io/name=default-k8s-different-port-20220701230032-10066
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_01T23_00_55_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 01 Jul 2022 23:00:52 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220701230032-10066
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 01 Jul 2022 23:13:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 01 Jul 2022 23:11:17 +0000   Fri, 01 Jul 2022 23:00:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 01 Jul 2022 23:11:17 +0000   Fri, 01 Jul 2022 23:00:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 01 Jul 2022 23:11:17 +0000   Fri, 01 Jul 2022 23:00:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 01 Jul 2022 23:11:17 +0000   Fri, 01 Jul 2022 23:00:49 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-different-port-20220701230032-10066
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873484Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873484Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                674fca36-2ebb-426c-b65b-bd78bdb510f5
	  Boot ID:                    a4927dcd-d031-4927-a8c8-2ea0f9a10287
	  Kernel Version:             5.15.0-1012-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.6
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220701230032-10066                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-49h72                                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-default-k8s-different-port-20220701230032-10066              250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220701230032-10066    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-qg5j2                                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-different-port-20220701230032-10066              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node default-k8s-different-port-20220701230032-10066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node default-k8s-different-port-20220701230032-10066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node default-k8s-different-port-20220701230032-10066 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m   node-controller  Node default-k8s-different-port-20220701230032-10066 event: Registered Node default-k8s-different-port-20220701230032-10066 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +1.002277] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +2.015803] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000004] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +4.255546] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000007] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000001] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000005] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000011] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +8.195166] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000007] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000003] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000003] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	
	* 
	* ==> etcd [50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52] <==
	* {"level":"info","ts":"2022-07-01T23:00:48.724Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-different-port-20220701230032-10066 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:00:49.355Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-01T23:00:49.355Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2022-07-01T23:05:43.769Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"187.454985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"info","ts":"2022-07-01T23:05:43.769Z","caller":"traceutil/trace.go:171","msg":"trace[70078069] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:450; }","duration":"187.563797ms","start":"2022-07-01T23:05:43.582Z","end":"2022-07-01T23:05:43.769Z","steps":["trace[70078069] 'agreement among raft nodes before linearized reading'  (duration: 92.830553ms)","trace[70078069] 'range keys from in-memory index tree'  (duration: 94.58687ms)"],"step_count":2}
	{"level":"warn","ts":"2022-07-01T23:05:43.769Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"120.015467ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-07-01T23:05:43.769Z","caller":"traceutil/trace.go:171","msg":"trace[192464171] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:450; }","duration":"120.215201ms","start":"2022-07-01T23:05:43.649Z","end":"2022-07-01T23:05:43.769Z","steps":["trace[192464171] 'agreement among raft nodes before linearized reading'  (duration: 25.42883ms)","trace[192464171] 'range keys from in-memory index tree'  (duration: 94.565934ms)"],"step_count":2}
	{"level":"info","ts":"2022-07-01T23:10:49.661Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":451}
	{"level":"info","ts":"2022-07-01T23:10:49.661Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":451,"took":"481.342µs"}
	
	* 
	* ==> kernel <==
	*  23:13:12 up 55 min,  0 users,  load average: 0.50, 0.89, 1.55
	Linux default-k8s-different-port-20220701230032-10066 5.15.0-1012-gcp #17~20.04.1-Ubuntu SMP Thu Jun 23 16:10:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2] <==
	* I0701 23:00:52.122528       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0701 23:00:52.122739       1 cache.go:39] Caches are synced for autoregister controller
	I0701 23:00:52.122766       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0701 23:00:52.126063       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0701 23:00:52.126674       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0701 23:00:52.138004       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0701 23:00:52.142988       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0701 23:00:52.766811       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0701 23:00:53.027362       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0701 23:00:53.030604       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0701 23:00:53.030622       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0701 23:00:53.389140       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0701 23:00:53.433846       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0701 23:00:53.563524       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0701 23:00:53.568204       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0701 23:00:53.569200       1 controller.go:611] quota admission added evaluator for: endpoints
	I0701 23:00:53.572998       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0701 23:00:54.150574       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0701 23:00:54.803526       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0701 23:00:54.810011       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0701 23:00:54.817885       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0701 23:00:54.923302       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0701 23:01:07.657482       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0701 23:01:07.805142       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0701 23:01:08.440379       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c] <==
	* I0701 23:01:06.998693       1 node_lifecycle_controller.go:1165] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0701 23:01:06.999829       1 shared_informer.go:262] Caches are synced for service account
	I0701 23:01:07.000984       1 shared_informer.go:262] Caches are synced for PV protection
	I0701 23:01:07.002759       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0701 23:01:07.010484       1 shared_informer.go:262] Caches are synced for expand
	I0701 23:01:07.021740       1 shared_informer.go:262] Caches are synced for stateful set
	I0701 23:01:07.048510       1 shared_informer.go:262] Caches are synced for disruption
	I0701 23:01:07.048533       1 disruption.go:371] Sending events to api server.
	I0701 23:01:07.050704       1 shared_informer.go:262] Caches are synced for daemon sets
	I0701 23:01:07.154573       1 shared_informer.go:262] Caches are synced for attach detach
	I0701 23:01:07.172619       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0701 23:01:07.198444       1 shared_informer.go:262] Caches are synced for endpoint
	I0701 23:01:07.207329       1 shared_informer.go:262] Caches are synced for resource quota
	I0701 23:01:07.226645       1 shared_informer.go:262] Caches are synced for resource quota
	I0701 23:01:07.255294       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0701 23:01:07.625587       1 shared_informer.go:262] Caches are synced for garbage collector
	I0701 23:01:07.659581       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0701 23:01:07.683577       1 shared_informer.go:262] Caches are synced for garbage collector
	I0701 23:01:07.683598       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0701 23:01:07.810761       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-49h72"
	I0701 23:01:07.812355       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qg5j2"
	I0701 23:01:08.007720       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-j7d7h"
	I0701 23:01:08.013547       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-zmnqs"
	I0701 23:01:08.206059       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0701 23:01:08.211257       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-j7d7h"
	
	* 
	* ==> kube-proxy [b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7] <==
	* I0701 23:01:08.413673       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0701 23:01:08.413740       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0701 23:01:08.413778       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0701 23:01:08.436458       1 server_others.go:206] "Using iptables Proxier"
	I0701 23:01:08.436499       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0701 23:01:08.436509       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0701 23:01:08.436529       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0701 23:01:08.436562       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 23:01:08.436755       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 23:01:08.437083       1 server.go:661] "Version info" version="v1.24.2"
	I0701 23:01:08.437106       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 23:01:08.437672       1 config.go:226] "Starting endpoint slice config controller"
	I0701 23:01:08.437701       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0701 23:01:08.438341       1 config.go:317] "Starting service config controller"
	I0701 23:01:08.438370       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0701 23:01:08.438585       1 config.go:444] "Starting node config controller"
	I0701 23:01:08.438745       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0701 23:01:08.538588       1 shared_informer.go:262] Caches are synced for service config
	I0701 23:01:08.538607       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0701 23:01:08.539109       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c] <==
	* W0701 23:00:52.118488       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 23:00:52.119392       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0701 23:00:52.119370       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0701 23:00:52.119451       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0701 23:00:52.119361       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 23:00:52.119469       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0701 23:00:52.118588       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0701 23:00:52.119594       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0701 23:00:52.965924       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0701 23:00:52.965973       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0701 23:00:52.981058       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0701 23:00:52.981091       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0701 23:00:53.008284       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0701 23:00:53.008567       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0701 23:00:53.024485       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0701 23:00:53.024517       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0701 23:00:53.118081       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 23:00:53.118128       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0701 23:00:53.118261       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0701 23:00:53.118301       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0701 23:00:53.171211       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 23:00:53.171246       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0701 23:00:53.218071       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0701 23:00:53.218112       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0701 23:00:55.254285       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-07-01 23:00:41 UTC, end at Fri 2022-07-01 23:13:12 UTC. --
	Jul 01 23:11:55 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:11:55.332151    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:00 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:00.333609    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:05 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:05.335006    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:10 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:10.336509    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:15 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:15.337730    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:20 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:20.339305    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:25 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:25.340888    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:28 default-k8s-different-port-20220701230032-10066 kubelet[1325]: I0701 23:12:28.283925    1325 scope.go:110] "RemoveContainer" containerID="054a85a49188bd722f8f2b3188f6bbcb67031b2c9d8d640d82248945b99fb88b"
	Jul 01 23:12:28 default-k8s-different-port-20220701230032-10066 kubelet[1325]: I0701 23:12:28.284085    1325 scope.go:110] "RemoveContainer" containerID="e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e"
	Jul 01 23:12:28 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:28.284439    1325 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-49h72_kube-system(bee4a070-eb2f-45af-a824-f8ebb08e21cb)\"" pod="kube-system/kindnet-49h72" podUID=bee4a070-eb2f-45af-a824-f8ebb08e21cb
	Jul 01 23:12:30 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:30.341675    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:35 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:35.342607    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:40 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:40.343685    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:41 default-k8s-different-port-20220701230032-10066 kubelet[1325]: I0701 23:12:41.953946    1325 scope.go:110] "RemoveContainer" containerID="e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e"
	Jul 01 23:12:41 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:41.954231    1325 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-49h72_kube-system(bee4a070-eb2f-45af-a824-f8ebb08e21cb)\"" pod="kube-system/kindnet-49h72" podUID=bee4a070-eb2f-45af-a824-f8ebb08e21cb
	Jul 01 23:12:45 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:45.344251    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:50 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:50.345243    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:55 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:55.346082    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:55 default-k8s-different-port-20220701230032-10066 kubelet[1325]: I0701 23:12:55.953939    1325 scope.go:110] "RemoveContainer" containerID="e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e"
	Jul 01 23:12:55 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:55.954220    1325 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-49h72_kube-system(bee4a070-eb2f-45af-a824-f8ebb08e21cb)\"" pod="kube-system/kindnet-49h72" podUID=bee4a070-eb2f-45af-a824-f8ebb08e21cb
	Jul 01 23:13:00 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:13:00.347391    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:13:05 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:13:05.347977    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:13:06 default-k8s-different-port-20220701230032-10066 kubelet[1325]: I0701 23:13:06.953864    1325 scope.go:110] "RemoveContainer" containerID="e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e"
	Jul 01 23:13:06 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:13:06.954134    1325 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-49h72_kube-system(bee4a070-eb2f-45af-a824-f8ebb08e21cb)\"" pod="kube-system/kindnet-49h72" podUID=bee4a070-eb2f-45af-a824-f8ebb08e21cb
	Jul 01 23:13:10 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:13:10.348740    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220701230032-10066 -n default-k8s-different-port-20220701230032-10066
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220701230032-10066 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-6d4b75cb6d-zmnqs storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220701230032-10066 describe pod busybox coredns-6d4b75cb6d-zmnqs storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220701230032-10066 describe pod busybox coredns-6d4b75cb6d-zmnqs storage-provisioner: exit status 1 (60.022188ms)

-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6qwg6 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-6qwg6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  2m47s (x2 over 8m2s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-6d4b75cb6d-zmnqs" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220701230032-10066 describe pod busybox coredns-6d4b75cb6d-zmnqs storage-provisioner: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220701230032-10066
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220701230032-10066:

-- stdout --
	[
	    {
	        "Id": "261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93",
	        "Created": "2022-07-01T23:00:40.408283404Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 240217,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-01T23:00:40.782604309Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93/hostname",
	        "HostsPath": "/var/lib/docker/containers/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93/hosts",
	        "LogPath": "/var/lib/docker/containers/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93-json.log",
	        "Name": "/default-k8s-different-port-20220701230032-10066",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220701230032-10066:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220701230032-10066",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dee8e48f45f2d547450f4daebe89b962ba00d4c3c0e85728311b87d0be50d5f2-init/diff:/var/lib/docker/overlay2/edb6d17bdc5cabdb1fac75f829fa6b696b451285a56daf994d10472ac162d1fa/diff:/var/lib/docker/overlay2/12553db54f9401cb3cd4ea304c82950d708bec4ffbeafa05f361df03389b23f5/diff:/var/lib/docker/overlay2/7e3fcd2813dd5acb24030b1419b7a8a53ea2329287a62c3cd166eb5e47617a4e/diff:/var/lib/docker/overlay2/15f31da888d327ea20972c47badf360a35e5d2b17a3fae8f69cb0d0d1db2bfa0/diff:/var/lib/docker/overlay2/4d7fcdaa5483b16cc999e79056aae74f5814a7aca964696154ac6ec31eebe508/diff:/var/lib/docker/overlay2/9f3a7dbc3c70fba7c77f5d8e54e394ac0c37123defe036d76af7d048d08b974e/diff:/var/lib/docker/overlay2/4f0b0b0343fd3ac1a6d5f774e91228b75f1e33f810714d09d38ecd1e38cfe250/diff:/var/lib/docker/overlay2/bfd684b2b6e2cfe1f30901dbf02332074c65e296b386b083ea38c10128b42115/diff:/var/lib/docker/overlay2/d05e53b39ab1962ee7dd1a17c7c6df7e7db940db5a19b0d9312ccafaf3c67afc/diff:/var/lib/docker/overlay2/7a24a3
8f96e1aff9985bf714d962a37888eced9c4b8a3018e499080634306871/diff:/var/lib/docker/overlay2/b54ee85e20ecd71a593d453d879a26c40869049e30ba5b18b4c021c4cf667ec6/diff:/var/lib/docker/overlay2/cc9f249b45cce30a5a0432a644435d68e765dc097e3bfdbde41b7d725f258866/diff:/var/lib/docker/overlay2/dac381b24f77e5de76295d05ca2d4ae6a611d31efdf28050dc8acbc16f893037/diff:/var/lib/docker/overlay2/fe9cd2517ce90b2330101b987c76605b9d73f40b8c967e33975ad22bd3e427df/diff:/var/lib/docker/overlay2/34da7eb6e789c53ba8f86e0d904243d3314979a523044238ab8b7e9e95f685d4/diff:/var/lib/docker/overlay2/44c3ffe75f70e4d9d5d17064609a4a167961cdca81aab03b2a72dfbe05d09f41/diff:/var/lib/docker/overlay2/76feeb878d0278a48230287307ce3a01000291c1284d2222d114e37836ebc153/diff:/var/lib/docker/overlay2/3ab5eb284edb3402610bb5d4fbd0d8c3fc6b53fd39d2a66cc3a9bb1e313fe5ee/diff:/var/lib/docker/overlay2/7604bb52dbaf4e2c359f0b2b11aa345bbf33498b0346044f4c6ff3178a3e8df5/diff:/var/lib/docker/overlay2/0a19abc50663f9812511b849524a7625a16f183fa64aff5a0a9c1c7664cc2027/diff:/var/lib/d
ocker/overlay2/00d25c59a62da71857955ae29c7c803b77cb4c97b23913f166c62440a3b63456/diff:/var/lib/docker/overlay2/cd6c06e5c4d7bdfb5a78ffcf6c0cf331a6101b713bd238ad5c0ab119ad599bf4/diff:/var/lib/docker/overlay2/73ae644d477b17687dc53b7176c9e05aa10e0e0cc22b0d2adbd96493280d2696/diff:/var/lib/docker/overlay2/1490b9c59b00b3ee6078671a8023960891ee2bfc21bfe92c12a71c87ea04fff1/diff:/var/lib/docker/overlay2/42f40e76b67a2f74340f543b0beab44f98b10e0a0903d4006a173272c52a31d0/diff:/var/lib/docker/overlay2/648763e18e8325f20ff26dd88a1788a2d9e575a9ca24f44b2430918381add179/diff:/var/lib/docker/overlay2/955b13af9b6a054abdcd0dfdf71ce57d6da1b5718bee40c52fc7739b4ff16496/diff:/var/lib/docker/overlay2/1e55f6bfa306e6667808548601e508c0335b7021c9c6791ee5ed3df2514aff12/diff:/var/lib/docker/overlay2/ed1fd4b4489438557a54639b2ed769eb0371193cd0129b222edea8f79274f7e8/diff:/var/lib/docker/overlay2/c4058f514a3e669de2d6139c7bc94d4187679acabf288e44e25b5d50db5b14ac/diff:/var/lib/docker/overlay2/f55c5f04c4a64f585a9b90e7f23e5d5142014f06b3bf877739879029b76
b574f/diff:/var/lib/docker/overlay2/74b9141dda92027ee02e8b3abc4bb05ed13ce01d58f4c3d644e7235e411a863e/diff:/var/lib/docker/overlay2/53dd59b4b1dfa382f4b296dffd27b84e247851098ea1c719fa3b8ed3e158e73d/diff:/var/lib/docker/overlay2/01e3082bf93042f963fa4d0b92796bd5a1f3663e87f24a76a6fb5f3dabdb3d2f/diff:/var/lib/docker/overlay2/6b96d2f01ba7bf16240d81397be753c74fa1eb78889ce36d1a88224496f5ee64/diff:/var/lib/docker/overlay2/1c326eb1859e3f119094b72e595fe89b3be1895193b0f9124ec3aa7be78fa0ff/diff:/var/lib/docker/overlay2/9c31f48fe49160b4b3d824b787329ea1d50133fd86b0c7f2054bebed2bd136b4/diff:/var/lib/docker/overlay2/6b78f7313fcedc631cc839c84f80e065670f47ccc8e548578527bec6b0f2cce3/diff:/var/lib/docker/overlay2/6bf15a167fccdef650b6e6b0daa521189e2c744a9a41640bc0e963421f139437/diff:/var/lib/docker/overlay2/b01230a9beae37bded75410c1a3849626080918fdc36535a1f4092f9227a7ccf/diff:/var/lib/docker/overlay2/793ce1632cff927c8e2528b23a62c94ed141cb52d5df04225085072c61ca5bb7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dee8e48f45f2d547450f4daebe89b962ba00d4c3c0e85728311b87d0be50d5f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dee8e48f45f2d547450f4daebe89b962ba00d4c3c0e85728311b87d0be50d5f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dee8e48f45f2d547450f4daebe89b962ba00d4c3c0e85728311b87d0be50d5f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220701230032-10066",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220701230032-10066/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220701230032-10066",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220701230032-10066",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220701230032-10066",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b84131f0a443f3e46a27c4a53bbb599561e5894a5499246152418e29a547de10",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49417"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49416"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49413"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49415"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49414"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b84131f0a443",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220701230032-10066": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "261fd4f89726",
	                        "default-k8s-different-port-20220701230032-10066"
	                    ],
	                    "NetworkID": "08b054338871e09e9987c4187ebe43c21ee49646be113b14ac2205c8647ea77d",
	                    "EndpointID": "dc3e5e6cc3047caf3c0c1415491005074769713a8b3dbbad0e642c61ea3eecd8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220701230032-10066 -n default-k8s-different-port-20220701230032-10066
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220701230032-10066 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:00 UTC | 01 Jul 22 23:00 UTC |
	|         | disable-driver-mounts-20220701230032-10066                 |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:00 UTC |                     |
	|         | default-k8s-different-port-20220701230032-10066            |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:02 UTC | 01 Jul 22 23:02 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:02 UTC | 01 Jul 22 23:03 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:03 UTC | 01 Jul 22 23:03 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:03 UTC |                     |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                          |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |          |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker                       |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	| start   | -p newest-cni-20220701230537-10066 --memory=2200           | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:06 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p newest-cni-20220701230537-10066 --memory=2200           | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:07 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC | 01 Jul 22 23:10 UTC |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC | 01 Jul 22 23:10 UTC |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC | 01 Jul 22 23:10 UTC |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC |                     |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |          |         |         |                     |                     |
	|         | --wait=true --preload=false                                |          |         |         |                     |                     |
	|         | --driver=docker                                            |          |         |         |                     |                     |
	|         | --container-runtime=containerd                             |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/01 23:10:28
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 23:10:28.436068  269883 out.go:296] Setting OutFile to fd 1 ...
	I0701 23:10:28.436180  269883 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:10:28.436194  269883 out.go:309] Setting ErrFile to fd 2...
	I0701 23:10:28.436201  269883 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:10:28.436618  269883 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 23:10:28.436880  269883 out.go:303] Setting JSON to false
	I0701 23:10:28.438233  269883 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3182,"bootTime":1656713847,"procs":500,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 23:10:28.438304  269883 start.go:125] virtualization: kvm guest
	I0701 23:10:28.441407  269883 out.go:177] * [no-preload-20220701225718-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0701 23:10:28.443028  269883 out.go:177]   - MINIKUBE_LOCATION=14483
	I0701 23:10:28.442993  269883 notify.go:193] Checking for updates...
	I0701 23:10:28.444809  269883 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 23:10:28.446488  269883 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:10:28.448125  269883 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 23:10:28.449746  269883 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 23:10:28.451761  269883 config.go:178] Loaded profile config "no-preload-20220701225718-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:10:28.452167  269883 driver.go:360] Setting default libvirt URI to qemu:///system
	I0701 23:10:28.493617  269883 docker.go:137] docker version: linux-20.10.17
	I0701 23:10:28.493713  269883 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:10:28.600580  269883 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-07-01 23:10:28.523096353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:10:28.600696  269883 docker.go:254] overlay module found
	I0701 23:10:28.603102  269883 out.go:177] * Using the docker driver based on existing profile
	I0701 23:10:28.604609  269883 start.go:284] selected driver: docker
	I0701 23:10:28.604630  269883 start.go:808] validating driver "docker" against &{Name:no-preload-20220701225718-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:no-preload-20220701225718-10066 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddre
ss: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:10:28.604744  269883 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 23:10:28.605512  269883 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:10:28.710819  269883 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-07-01 23:10:28.635526958 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:10:28.711050  269883 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 23:10:28.711069  269883 cni.go:95] Creating CNI manager for ""
	I0701 23:10:28.711075  269883 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:10:28.711086  269883 start_flags.go:310] config:
	{Name:no-preload-20220701225718-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:no-preload-20220701225718-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
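Note on the CNI choice logged at cni.go:162 above: with the kic "docker" driver paired with a containerd runtime, minikube recommends kindnet rather than relying on a runtime default. A minimal Go sketch of that decision; chooseCNI is a hypothetical name, not minikube's actual API:

    package main

    import "fmt"

    // chooseCNI illustrates the selection logged above: the docker driver
    // with a non-docker runtime gets kindnet recommended.
    func chooseCNI(driver, runtime string) string {
        if driver == "docker" && runtime == "containerd" {
            return "kindnet"
        }
        return "" // empty: leave the choice to driver/bootstrapper defaults
    }

    func main() {
        fmt.Println(chooseCNI("docker", "containerd")) // kindnet
    }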
	I0701 23:10:28.713353  269883 out.go:177] * Starting control plane node no-preload-20220701225718-10066 in cluster no-preload-20220701225718-10066
	I0701 23:10:28.714773  269883 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0701 23:10:28.716148  269883 out.go:177] * Pulling base image ...
	I0701 23:10:28.717448  269883 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:10:28.717489  269883 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0701 23:10:28.717646  269883 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/config.json ...
	I0701 23:10:28.717791  269883 cache.go:107] acquiring lock: {Name:mk3aed9edf4e045130f7a3c6fdc7a324a577ec7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.717837  269883 cache.go:107] acquiring lock: {Name:mk8030c0afbd72b38281e129af86f3686df5df89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.717878  269883 cache.go:107] acquiring lock: {Name:mk7ec70fd71856cc28acc69a0da3b72748a4420a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.717910  269883 cache.go:107] acquiring lock: {Name:mk881497b5d07c75cf2f158738d77e27bd2a369d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.717808  269883 cache.go:107] acquiring lock: {Name:mk9ab11f02b498228e877e934d5aaa541b21cbf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.717960  269883 cache.go:107] acquiring lock: {Name:mk5766c1b843c08c650f7c84836d8506a465b496 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.717997  269883 cache.go:107] acquiring lock: {Name:mk3b0e90d77cbe629b1ed14b104838f8ec036785 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.718031  269883 cache.go:107] acquiring lock: {Name:mk72f6f6d64839ffc62747fa568c11250cb4422d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.718093  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2 exists
	I0701 23:10:28.718103  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0701 23:10:28.718116  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2 exists
	I0701 23:10:28.718122  269883 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 184.555µs
	I0701 23:10:28.718134  269883 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0701 23:10:28.718115  269883 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.24.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2" took 279.008µs
	I0701 23:10:28.718142  269883 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.24.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2 succeeded
	I0701 23:10:28.718093  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0701 23:10:28.718153  269883 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.24.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2" took 281.275µs
	I0701 23:10:28.718166  269883 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.24.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2 succeeded
	I0701 23:10:28.718164  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 exists
	I0701 23:10:28.718177  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2 exists
	I0701 23:10:28.718210  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2 exists
	I0701 23:10:28.718216  269883 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.24.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2" took 228.65µs
	I0701 23:10:28.718224  269883 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.24.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2" took 427.452µs
	I0701 23:10:28.718233  269883 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.24.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2 succeeded
	I0701 23:10:28.718167  269883 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 391.086µs
	I0701 23:10:28.718242  269883 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0701 23:10:28.718249  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 exists
	I0701 23:10:28.718229  269883 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.24.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2 succeeded
	I0701 23:10:28.718187  269883 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0" took 349.266µs
	I0701 23:10:28.718259  269883 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.3-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 succeeded
	I0701 23:10:28.718262  269883 cache.go:96] cache image "k8s.gcr.io/pause:3.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7" took 344.995µs
	I0701 23:10:28.718273  269883 cache.go:80] save to tar file k8s.gcr.io/pause:3.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 succeeded
	I0701 23:10:28.718283  269883 cache.go:87] Successfully saved all images to host disk.
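The cache.go lines above all follow one pattern: take a per-image lock, stat the tarball under .minikube/cache/images/amd64, and skip the download when it already exists, which is why each image resolves in a few hundred microseconds. A sketch of that check, assuming only the standard library (saveToTar is a hypothetical stand-in for the real pull-and-save step):

    package main

    import (
        "fmt"
        "os"
    )

    // saveToTar stands in for pulling the image and writing the tarball.
    func saveToTar(img, tarPath string) error { return nil }

    func cacheImage(img, tarPath string) error {
        if _, err := os.Stat(tarPath); err == nil {
            fmt.Printf("%s exists, skipping save of %s\n", tarPath, img)
            return nil // the "exists" / "succeeded" pairs above
        }
        return saveToTar(img, tarPath)
    }

    func main() {
        _ = cacheImage("k8s.gcr.io/pause:3.7", "/tmp/cache/images/amd64/k8s.gcr.io/pause_3.7")
    }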
	I0701 23:10:28.752422  269883 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0701 23:10:28.752465  269883 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0701 23:10:28.752487  269883 cache.go:208] Successfully downloaded all kic artifacts
	I0701 23:10:28.752528  269883 start.go:352] acquiring machines lock for no-preload-20220701225718-10066: {Name:mk0df5e406dc07f9b5bbaf453954c11d3f5f2a14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.752631  269883 start.go:356] acquired machines lock for "no-preload-20220701225718-10066" in 71.505µs
	I0701 23:10:28.752665  269883 start.go:94] Skipping create...Using existing machine configuration
	I0701 23:10:28.752673  269883 fix.go:55] fixHost starting: 
	I0701 23:10:28.752958  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:10:28.785790  269883 fix.go:103] recreateIfNeeded on no-preload-20220701225718-10066: state=Stopped err=<nil>
	W0701 23:10:28.785828  269883 fix.go:129] unexpected machine state, will restart: <nil>
	I0701 23:10:28.788364  269883 out.go:177] * Restarting existing docker container for "no-preload-20220701225718-10066" ...
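fix.go:103 above reduces to: inspect the container's state and, when it is merely stopped, restart it instead of recreating the machine. An illustrative sketch (dockerInspectState is a hypothetical wrapper around the docker container inspect call in the log):

    package main

    import "fmt"

    // dockerInspectState stands in for:
    //   docker container inspect <name> --format={{.State.Status}}
    func dockerInspectState(name string) string { return "Stopped" }

    func fixHost(name string) {
        if dockerInspectState(name) == "Stopped" {
            fmt.Printf("* Restarting existing docker container for %q ...\n", name)
            // docker start <name> follows, as logged below
        }
    }

    func main() { fixHost("no-preload-20220701225718-10066") }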
	I0701 23:10:28.066781  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:30.566251  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:28.789918  269883 cli_runner.go:164] Run: docker start no-preload-20220701225718-10066
	I0701 23:10:29.179864  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:10:29.217535  269883 kic.go:416] container "no-preload-20220701225718-10066" state is running.
	I0701 23:10:29.217931  269883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220701225718-10066
	I0701 23:10:29.251855  269883 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/config.json ...
	I0701 23:10:29.252122  269883 machine.go:88] provisioning docker machine ...
	I0701 23:10:29.252152  269883 ubuntu.go:169] provisioning hostname "no-preload-20220701225718-10066"
	I0701 23:10:29.252196  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:29.287524  269883 main.go:134] libmachine: Using SSH client type: native
	I0701 23:10:29.287708  269883 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0701 23:10:29.287733  269883 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220701225718-10066 && echo "no-preload-20220701225718-10066" | sudo tee /etc/hostname
	I0701 23:10:29.288440  269883 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37316->127.0.0.1:49437: read: connection reset by peer
	I0701 23:10:32.419154  269883 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220701225718-10066
	
	I0701 23:10:32.419236  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:32.453352  269883 main.go:134] libmachine: Using SSH client type: native
	I0701 23:10:32.453538  269883 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0701 23:10:32.453573  269883 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220701225718-10066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220701225718-10066/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220701225718-10066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 23:10:32.570226  269883 main.go:134] libmachine: SSH cmd err, output: <nil>: 
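The SSH command above is an idempotent /etc/hosts edit: it rewrites or appends the 127.0.1.1 entry only when the machine name is missing. A sketch of rendering that script from Go, condensed onto one line but using the same shell text as the log:

    package main

    import "fmt"

    func hostsScript(name string) string {
        return fmt.Sprintf(
            `if ! grep -xq '.*\s%[1]s' /etc/hosts; then `+
                `if grep -xq '127.0.1.1\s.*' /etc/hosts; then `+
                `sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts; `+
                `else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi`,
            name)
    }

    func main() { fmt.Println(hostsScript("no-preload-20220701225718-10066")) }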
	I0701 23:10:32.570259  269883 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
	I0701 23:10:32.570290  269883 ubuntu.go:177] setting up certificates
	I0701 23:10:32.570314  269883 provision.go:83] configureAuth start
	I0701 23:10:32.570364  269883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220701225718-10066
	I0701 23:10:32.604673  269883 provision.go:138] copyHostCerts
	I0701 23:10:32.604741  269883 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
	I0701 23:10:32.604764  269883 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
	I0701 23:10:32.604850  269883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1679 bytes)
	I0701 23:10:32.605244  269883 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
	I0701 23:10:32.605267  269883 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
	I0701 23:10:32.605317  269883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
	I0701 23:10:32.605447  269883 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
	I0701 23:10:32.605456  269883 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
	I0701 23:10:32.605493  269883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
	I0701 23:10:32.605552  269883 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220701225718-10066 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220701225718-10066]
	I0701 23:10:32.772605  269883 provision.go:172] copyRemoteCerts
	I0701 23:10:32.772663  269883 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 23:10:32.772694  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:32.806036  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:10:32.889557  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 23:10:32.906187  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0701 23:10:32.922754  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 23:10:32.939238  269883 provision.go:86] duration metric: configureAuth took 368.908559ms
	I0701 23:10:32.939268  269883 ubuntu.go:193] setting minikube options for container-runtime
	I0701 23:10:32.939429  269883 config.go:178] Loaded profile config "no-preload-20220701225718-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:10:32.939441  269883 machine.go:91] provisioned docker machine in 3.687302971s
	I0701 23:10:32.939447  269883 start.go:306] post-start starting for "no-preload-20220701225718-10066" (driver="docker")
	I0701 23:10:32.939452  269883 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 23:10:32.939491  269883 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 23:10:32.939527  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:32.975147  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:10:33.062201  269883 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 23:10:33.065814  269883 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 23:10:33.065840  269883 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 23:10:33.065854  269883 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 23:10:33.065866  269883 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0701 23:10:33.065885  269883 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
	I0701 23:10:33.065955  269883 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
	I0701 23:10:33.066065  269883 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem -> 100662.pem in /etc/ssl/certs
	I0701 23:10:33.066201  269883 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 23:10:33.073331  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:10:33.089719  269883 start.go:309] post-start completed in 150.262783ms
	I0701 23:10:33.089782  269883 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 23:10:33.089819  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:33.125536  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:10:33.210970  269883 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 23:10:33.214873  269883 fix.go:57] fixHost completed within 4.462195685s
	I0701 23:10:33.214897  269883 start.go:81] releasing machines lock for "no-preload-20220701225718-10066", held for 4.462242204s
	I0701 23:10:33.214986  269883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220701225718-10066
	I0701 23:10:33.248938  269883 ssh_runner.go:195] Run: systemctl --version
	I0701 23:10:33.248978  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:33.249031  269883 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0701 23:10:33.249088  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:33.285027  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:10:33.286024  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:10:33.386339  269883 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 23:10:33.397864  269883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 23:10:33.407068  269883 docker.go:179] disabling docker service ...
	I0701 23:10:33.407108  269883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 23:10:33.416965  269883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 23:10:33.425446  269883 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 23:10:33.066619  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:35.565978  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:33.498217  269883 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 23:10:33.568864  269883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
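docker.go:179 above hands the node over to containerd by stopping, disabling, and masking the Docker units, then verifying the service is inactive. The logged sequence, collected for reference (minikube issues these over SSH one at a time):

    package main

    import "fmt"

    func main() {
        // The docker-disable sequence logged above, in order:
        cmds := []string{
            "sudo systemctl stop -f docker.socket",
            "sudo systemctl stop -f docker.service",
            "sudo systemctl disable docker.socket",
            "sudo systemctl mask docker.service",
            "sudo systemctl is-active --quiet service docker", // non-zero exit once stopped
        }
        for _, c := range cmds {
            fmt.Println(c)
        }
    }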
	I0701 23:10:33.577568  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 23:10:33.589825  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0701 23:10:33.598932  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0701 23:10:33.606840  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0701 23:10:33.614425  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0701 23:10:33.622221  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
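Those sed edits rewrite /etc/containerd/config.toml in place; together with the /etc/crictl.yaml written just before them, the node's CRI stack ends up with the settings below, reconstructed from the commands themselves (values only, not the full config):

    package main

    import "fmt"

    const (
        // Exactly what the printf/tee above writes to /etc/crictl.yaml:
        crictlYAML = `runtime-endpoint: unix:///run/containerd/containerd.sock
    image-endpoint: unix:///run/containerd/containerd.sock
    `
        // Lines the sed expressions leave in /etc/containerd/config.toml:
        containerdEdits = `sandbox_image = "k8s.gcr.io/pause:3.7"
    restrict_oom_score_adj = false
    SystemdCgroup = false
    conf_dir = "/etc/cni/net.mk"
    imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]
    `
    )

    func main() {
        fmt.Print(crictlYAML)
        fmt.Print(containerdEdits)
    }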
	I0701 23:10:33.629559  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
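The base64 payload in that command is tiny: "dmVyc2lvbiA9IDIK" decodes to the single line version = 2, i.e. the drop-in selects containerd's v2 config schema. This can be checked with the standard library:

    package main

    import (
        "encoding/base64"
        "fmt"
    )

    func main() {
        b, err := base64.StdEncoding.DecodeString("dmVyc2lvbiA9IDIK")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(b)) // prints: version = 2
    }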
	I0701 23:10:33.642101  269883 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 23:10:33.648858  269883 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 23:10:33.655601  269883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 23:10:33.724238  269883 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 23:10:33.794793  269883 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0701 23:10:33.794860  269883 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0701 23:10:33.798329  269883 start.go:471] Will wait 60s for crictl version
	I0701 23:10:33.798381  269883 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:10:33.824964  269883 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-07-01T23:10:33Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
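retry.go:31 above is an expected transient failure: immediately after systemctl restart containerd, the CRI server still answers "server is not initialized yet", so the probe is retried after a delay inside the 60s budget declared at start.go:471. The shape of that loop as a sketch (runSSH is a hypothetical stand-in, and minikube's real retry helper computes its own backoff):

    package main

    import (
        "fmt"
        "time"
    )

    // runSSH stands in for executing a command on the node over SSH.
    func runSSH(cmd string) (string, error) {
        return "", fmt.Errorf("server is not initialized yet")
    }

    func main() {
        deadline := time.Now().Add(60 * time.Second) // "Will wait 60s for crictl version"
        for time.Now().Before(deadline) {
            if out, err := runSSH("sudo crictl version"); err == nil {
                fmt.Println(out)
                return
            }
            time.Sleep(11 * time.Second) // the log shows an ~11s backoff before the retry
        }
    }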
	I0701 23:10:38.067922  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:40.566156  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:44.872066  269883 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:10:44.894512  269883 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0701 23:10:44.894588  269883 ssh_runner.go:195] Run: containerd --version
	I0701 23:10:44.922163  269883 ssh_runner.go:195] Run: containerd --version
	I0701 23:10:44.951446  269883 out.go:177] * Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	I0701 23:10:43.066318  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:45.067096  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:47.566083  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:44.952886  269883 cli_runner.go:164] Run: docker network inspect no-preload-20220701225718-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 23:10:44.987019  269883 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0701 23:10:44.990236  269883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:10:44.999796  269883 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:10:44.999840  269883 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:10:45.023088  269883 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:10:45.023106  269883 cache_images.go:84] Images are preloaded, skipping loading
	I0701 23:10:45.023142  269883 ssh_runner.go:195] Run: sudo crictl info
	I0701 23:10:45.045429  269883 cni.go:95] Creating CNI manager for ""
	I0701 23:10:45.045449  269883 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:10:45.045462  269883 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0701 23:10:45.045472  269883 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220701225718-10066 NodeName:no-preload-20220701225718-10066 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0701 23:10:45.045591  269883 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20220701225718-10066"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
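This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and later diffed against the node's existing kubeadm.yaml. To validate such a file by hand, kubeadm can parse it without touching the node; this is an illustrative check, not a step minikube itself runs here:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // --config and --dry-run are standard kubeadm flags; this only renders
        // what init would do, it does not start a control plane.
        cmd := exec.Command("kubeadm", "init",
            "--config", "/var/tmp/minikube/kubeadm.yaml.new", "--dry-run")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        _ = cmd.Run()
    }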
	I0701 23:10:45.045663  269883 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20220701225718-10066 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:no-preload-20220701225718-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0701 23:10:45.045704  269883 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0701 23:10:45.052996  269883 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 23:10:45.053052  269883 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 23:10:45.059599  269883 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (524 bytes)
	I0701 23:10:45.073222  269883 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 23:10:45.085371  269883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
	I0701 23:10:45.097222  269883 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0701 23:10:45.099941  269883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:10:45.108409  269883 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066 for IP: 192.168.94.2
	I0701 23:10:45.108501  269883 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
	I0701 23:10:45.108550  269883 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
	I0701 23:10:45.108623  269883 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/client.key
	I0701 23:10:45.108682  269883 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.key.ad8e880a
	I0701 23:10:45.108742  269883 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/proxy-client.key
	I0701 23:10:45.108853  269883 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem (1338 bytes)
	W0701 23:10:45.108900  269883 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066_empty.pem, impossibly tiny 0 bytes
	I0701 23:10:45.108917  269883 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 23:10:45.108949  269883 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
	I0701 23:10:45.108984  269883 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
	I0701 23:10:45.109016  269883 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1679 bytes)
	I0701 23:10:45.109075  269883 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:10:45.109765  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0701 23:10:45.125615  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 23:10:45.141690  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 23:10:45.158417  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 23:10:45.174871  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 23:10:45.191499  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0701 23:10:45.207611  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 23:10:45.223735  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 23:10:45.240344  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 23:10:45.256863  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem --> /usr/share/ca-certificates/10066.pem (1338 bytes)
	I0701 23:10:45.273492  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /usr/share/ca-certificates/100662.pem (1708 bytes)
	I0701 23:10:45.289914  269883 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 23:10:45.301974  269883 ssh_runner.go:195] Run: openssl version
	I0701 23:10:45.306905  269883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 23:10:45.314511  269883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:10:45.317377  269883 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul  1 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:10:45.317418  269883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:10:45.322125  269883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 23:10:45.328948  269883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10066.pem && ln -fs /usr/share/ca-certificates/10066.pem /etc/ssl/certs/10066.pem"
	I0701 23:10:45.335846  269883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10066.pem
	I0701 23:10:45.338716  269883 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul  1 22:28 /usr/share/ca-certificates/10066.pem
	I0701 23:10:45.338814  269883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10066.pem
	I0701 23:10:45.343304  269883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10066.pem /etc/ssl/certs/51391683.0"
	I0701 23:10:45.350375  269883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100662.pem && ln -fs /usr/share/ca-certificates/100662.pem /etc/ssl/certs/100662.pem"
	I0701 23:10:45.357390  269883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100662.pem
	I0701 23:10:45.360175  269883 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul  1 22:28 /usr/share/ca-certificates/100662.pem
	I0701 23:10:45.360212  269883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100662.pem
	I0701 23:10:45.364513  269883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100662.pem /etc/ssl/certs/3ec20f2e.0"
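The openssl x509 -hash probes above install each CA the way update-ca-certificates would: compute the certificate's subject hash and symlink <hash>.0 in /etc/ssl/certs to the PEM (b5213941.0 for minikubeCA here). The same probe, scripted:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // openssl prints the subject hash used for the /etc/ssl/certs/<hash>.0 symlink.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941, as linked above
        fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
    }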
	I0701 23:10:45.370833  269883 kubeadm.go:395] StartCluster: {Name:no-preload-20220701225718-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:no-preload-20220701225718-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:10:45.370926  269883 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0701 23:10:45.370953  269883 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:10:45.394911  269883 cri.go:87] found id: "cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602"
	I0701 23:10:45.394940  269883 cri.go:87] found id: "b6c46b43c578cf7a32b68dbbe142a34080a7ce8d919b3b25cd350be67762ece8"
	I0701 23:10:45.394947  269883 cri.go:87] found id: "ac546802283136693ffa6f232baccec2dc6bf324f6201f319db62ceb6e58dae8"
	I0701 23:10:45.394953  269883 cri.go:87] found id: "9f4bd4048f717703723402dc2bd31164255acb9e131a3e8a35d47bf250201012"
	I0701 23:10:45.394959  269883 cri.go:87] found id: "6af50f79ce840cbfb4c4974a97d45453382d577490ba8a05d4aa2d0412fbf462"
	I0701 23:10:45.394966  269883 cri.go:87] found id: "b90cae4e4b7ea37050f35e70b6b5bc7d1c0d2b243219ea44859d2ac0853bb228"
	I0701 23:10:45.394971  269883 cri.go:87] found id: ""
	I0701 23:10:45.395004  269883 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0701 23:10:45.407224  269883 cri.go:114] JSON = null
	W0701 23:10:45.407274  269883 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
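The warning at kubeadm.go:402 is a consistency check, not a fatal error: crictl ps found six kube-system containers, but runc list reported none as paused (JSON = null), so there is nothing to unpause and the code falls through to a cluster restart. Roughly, with hypothetical names:

    package main

    import "fmt"

    func main() {
        found := 6  // containers found via crictl ps above
        paused := 0 // containers reported by runc list (JSON = null)
        if paused != found {
            fmt.Printf("unpause failed: list paused: list returned %d containers, but ps returned %d\n",
                paused, found)
            // fall through: existing configuration files found, attempt cluster restart
        }
    }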
	I0701 23:10:45.407316  269883 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 23:10:45.413788  269883 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0701 23:10:45.413811  269883 kubeadm.go:626] restartCluster start
	I0701 23:10:45.413848  269883 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 23:10:45.419941  269883 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:45.420556  269883 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220701225718-10066" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:10:45.420886  269883 kubeconfig.go:127] "no-preload-20220701225718-10066" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig - will repair!
	I0701 23:10:45.421418  269883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:10:45.422688  269883 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 23:10:45.428759  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:45.428807  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:45.436036  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
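From here the same block repeats to the end of this excerpt: api_server.go:165 probes for a kube-apiserver process with pgrep roughly every 200ms, and every probe exits with status 1 because the apiserver never comes up, which is ultimately why this start fails. The polling shape as a sketch (runSSH is hypothetical; the real loop is bounded by an overall timeout rather than a fixed count):

    package main

    import (
        "fmt"
        "time"
    )

    // runSSH stands in for the pgrep probe issued over SSH.
    func runSSH(cmd string) error { return fmt.Errorf("exit status 1") }

    func main() {
        for i := 0; i < 5; i++ {
            if err := runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
                break // apiserver pid found
            }
            time.Sleep(200 * time.Millisecond) // matches the ~200ms spacing above
        }
    }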
	I0701 23:10:45.636442  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:45.636498  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:45.645173  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:45.836479  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:45.836560  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:45.845558  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:46.036840  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:46.036996  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:46.045508  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:46.236821  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:46.236886  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:46.245242  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:46.436407  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:46.436476  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:46.445374  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:46.636693  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:46.636776  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:46.645429  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:46.836720  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:46.836780  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:46.845765  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:47.037048  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:47.037122  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:47.045534  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:47.236841  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:47.236919  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:47.245338  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:47.436619  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:47.436682  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:47.445831  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:47.637106  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:47.637177  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:47.646000  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:47.836229  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:47.836305  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:47.844891  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.036112  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:48.036194  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:48.044872  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.237166  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:48.237244  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:48.245689  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.437095  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:48.437163  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:10:49.567167  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:52.066482  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	W0701 23:10:48.446079  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.446102  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:48.446147  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:48.453958  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.453982  269883 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
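The ~3 seconds of failed `pgrep` probes above is a poll-until-deadline loop: run the same check every 200ms (matching the timestamps) and give up with "timed out waiting for the condition" once the window closes. A hedged sketch of that shape — the interval, timeout, and helper names are illustrative, not minikube's actual code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// checkAPIServer mirrors the probe in the log: pgrep for the
// kube-apiserver process, where a non-zero exit means "not running yet".
func checkAPIServer() error {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
}

// pollUntil retries fn every interval until it succeeds or the deadline passes.
func pollUntil(interval, timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := fn(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	// 200ms cadence matches the log timestamps above.
	fmt.Println(pollUntil(200*time.Millisecond, 3*time.Second, checkAPIServer))
}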
	I0701 23:10:48.453989  269883 kubeadm.go:1092] stopping kube-system containers ...
	I0701 23:10:48.454005  269883 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0701 23:10:48.454064  269883 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:10:48.477691  269883 cri.go:87] found id: "cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602"
	I0701 23:10:48.477710  269883 cri.go:87] found id: "b6c46b43c578cf7a32b68dbbe142a34080a7ce8d919b3b25cd350be67762ece8"
	I0701 23:10:48.477717  269883 cri.go:87] found id: "ac546802283136693ffa6f232baccec2dc6bf324f6201f319db62ceb6e58dae8"
	I0701 23:10:48.477722  269883 cri.go:87] found id: "9f4bd4048f717703723402dc2bd31164255acb9e131a3e8a35d47bf250201012"
	I0701 23:10:48.477728  269883 cri.go:87] found id: "6af50f79ce840cbfb4c4974a97d45453382d577490ba8a05d4aa2d0412fbf462"
	I0701 23:10:48.477734  269883 cri.go:87] found id: "b90cae4e4b7ea37050f35e70b6b5bc7d1c0d2b243219ea44859d2ac0853bb228"
	I0701 23:10:48.477740  269883 cri.go:87] found id: ""
	I0701 23:10:48.477744  269883 cri.go:232] Stopping containers: [cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602 b6c46b43c578cf7a32b68dbbe142a34080a7ce8d919b3b25cd350be67762ece8 ac546802283136693ffa6f232baccec2dc6bf324f6201f319db62ceb6e58dae8 9f4bd4048f717703723402dc2bd31164255acb9e131a3e8a35d47bf250201012 6af50f79ce840cbfb4c4974a97d45453382d577490ba8a05d4aa2d0412fbf462 b90cae4e4b7ea37050f35e70b6b5bc7d1c0d2b243219ea44859d2ac0853bb228]
	I0701 23:10:48.477788  269883 ssh_runner.go:195] Run: which crictl
	I0701 23:10:48.480366  269883 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602 b6c46b43c578cf7a32b68dbbe142a34080a7ce8d919b3b25cd350be67762ece8 ac546802283136693ffa6f232baccec2dc6bf324f6201f319db62ceb6e58dae8 9f4bd4048f717703723402dc2bd31164255acb9e131a3e8a35d47bf250201012 6af50f79ce840cbfb4c4974a97d45453382d577490ba8a05d4aa2d0412fbf462 b90cae4e4b7ea37050f35e70b6b5bc7d1c0d2b243219ea44859d2ac0853bb228
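The stop step above is two shell-outs: `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` to collect the container IDs, then a single `crictl stop` handed all of them at once. A minimal sketch of that pairing, with error handling trimmed (flags and sudo use as in the log; the helper name is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystemContainers lists kube-system container IDs via crictl,
// then stops them all in one invocation, as the log above does.
func stopKubeSystemContainers() error {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out)) // one 64-hex ID per line
	if len(ids) == 0 {
		return nil // nothing to stop
	}
	args := append([]string{"crictl", "stop"}, ids...)
	fmt.Println("stopping", len(ids), "containers")
	return exec.Command("sudo", args...).Run()
}

func main() {
	if err := stopKubeSystemContainers(); err != nil {
		fmt.Println("stop failed:", err)
	}
}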
	I0701 23:10:48.505890  269883 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0701 23:10:48.515195  269883 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:10:48.521761  269883 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul  1 22:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul  1 22:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jul  1 22:57 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul  1 22:57 /etc/kubernetes/scheduler.conf
	
	I0701 23:10:48.521807  269883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0701 23:10:48.527978  269883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0701 23:10:48.534409  269883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0701 23:10:48.540704  269883 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.540749  269883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 23:10:48.547734  269883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0701 23:10:48.555417  269883 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.555456  269883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0701 23:10:48.561653  269883 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:10:48.568679  269883 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0701 23:10:48.568731  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:10:48.610822  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:10:49.481354  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:10:49.661389  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:10:49.719236  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
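The restart path above replays `kubeadm init` phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than running a full init, each invocation under a PATH pointing at the versioned binaries. A sketch of that sequencing, stopping at the first failure so later phases never see a half-built state (binary directory and config path are the ones from the log; the helper itself is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// runPhases replays the kubeadm init phases from the log, in order.
func runPhases(binDir, cfg string) error {
	phases := []string{
		"certs all", "kubeconfig all", "kubelet-start",
		"control-plane all", "etcd local",
	}
	for _, p := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH=%q:$PATH kubeadm init phase %s --config %s`,
			binDir, p, cfg)
		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
			return fmt.Errorf("phase %q: %w", p, err)
		}
	}
	return nil
}

func main() {
	err := runPhases("/var/lib/minikube/binaries/v1.24.2",
		"/var/tmp/minikube/kubeadm.yaml")
	fmt.Println(err)
}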
	I0701 23:10:49.825158  269883 api_server.go:51] waiting for apiserver process to appear ...
	I0701 23:10:49.825270  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:10:50.335318  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:10:50.834701  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:10:50.846390  269883 api_server.go:71] duration metric: took 1.021235424s to wait for apiserver process to appear ...
	I0701 23:10:50.846420  269883 api_server.go:87] waiting for apiserver healthz status ...
	I0701 23:10:50.846431  269883 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0701 23:10:50.846825  269883 api_server.go:256] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0701 23:10:51.347542  269883 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0701 23:10:54.133900  269883 api_server.go:266] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0701 23:10:54.133986  269883 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0701 23:10:54.347164  269883 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0701 23:10:54.351414  269883 api_server.go:266] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:10:54.351438  269883 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:10:54.847723  269883 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0701 23:10:54.852128  269883 api_server.go:266] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:10:54.852158  269883 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:10:55.347708  269883 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0701 23:10:55.352265  269883 api_server.go:266] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0701 23:10:55.358013  269883 api_server.go:140] control plane version: v1.24.2
	I0701 23:10:55.358035  269883 api_server.go:130] duration metric: took 4.511609554s to wait for apiserver health ...
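The healthz wait above tolerates a 403 (the probe is anonymous, and RBAC bootstrap isn't done yet) and a 500 (`[-]poststarthook/rbac/bootstrap-roles failed`) and only stops on a 200 `ok`. A hedged sketch of that probe — anonymous HTTPS with certificate verification skipped, retried on the ~500ms cadence the timestamps show; the helper is illustrative, not minikube's api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200.
// The request carries no credentials, which is why the first probes in the
// log see 403 for system:anonymous before RBAC bootstrap completes.
func waitHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Self-signed apiserver cert; skip verification as a probe would.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.94.2:8443/healthz",
		500*time.Millisecond, 4*time.Minute))
}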
	I0701 23:10:55.358045  269883 cni.go:95] Creating CNI manager for ""
	I0701 23:10:55.358050  269883 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:10:55.360161  269883 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:10:54.067513  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:56.566103  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:10:55.361441  269883 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:10:55.364979  269883 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:10:55.364998  269883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:10:55.377645  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0701 23:10:56.166732  269883 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 23:10:56.173254  269883 system_pods.go:59] 9 kube-system pods found
	I0701 23:10:56.173284  269883 system_pods.go:61] "coredns-6d4b75cb6d-mbfz4" [2ba91f90-b153-4f32-8309-108f0c8156db] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:10:56.173292  269883 system_pods.go:61] "etcd-no-preload-20220701225718-10066" [eb03d3be-2878-4ae8-9dfc-5a4fccffca06] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0701 23:10:56.173300  269883 system_pods.go:61] "kindnet-b5wkl" [bc770683-78b7-449f-a0af-5a2cc006275c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0701 23:10:56.173308  269883 system_pods.go:61] "kube-apiserver-no-preload-20220701225718-10066" [83390193-15db-49db-9ca3-065ebded60a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0701 23:10:56.173317  269883 system_pods.go:61] "kube-controller-manager-no-preload-20220701225718-10066" [086fda3b-1ef9-4e42-944f-4c20bbde78b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0701 23:10:56.173323  269883 system_pods.go:61] "kube-proxy-5ck82" [1b54a384-18b1-4c4f-84ab-fe3f8d2c3100] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0701 23:10:56.173328  269883 system_pods.go:61] "kube-scheduler-no-preload-20220701225718-10066" [87e67937-d3d1-47f6-9ee3-cb47460c5a96] Running
	I0701 23:10:56.173334  269883 system_pods.go:61] "metrics-server-5c6f97fb75-hqds8" [8c904dd9-6f61-494f-9ce0-b1e79f7a8f32] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:10:56.173344  269883 system_pods.go:61] "storage-provisioner" [fb659ca7-b379-4467-bf65-4ae7b8b0b2a9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:10:56.173348  269883 system_pods.go:74] duration metric: took 6.593831ms to wait for pod list to return data ...
	I0701 23:10:56.173354  269883 node_conditions.go:102] verifying NodePressure condition ...
	I0701 23:10:56.175724  269883 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0701 23:10:56.175753  269883 node_conditions.go:123] node cpu capacity is 8
	I0701 23:10:56.175768  269883 node_conditions.go:105] duration metric: took 2.40915ms to run NodePressure ...
	I0701 23:10:56.175789  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:10:56.319373  269883 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0701 23:10:56.323915  269883 kubeadm.go:777] kubelet initialised
	I0701 23:10:56.323936  269883 kubeadm.go:778] duration metric: took 4.537399ms waiting for restarted kubelet to initialise ...
	I0701 23:10:56.323943  269883 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:10:56.329062  269883 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace to be "Ready" ...
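From here the test waits up to 4m0s for each system-critical pod to report the Ready condition; every repeated `pod_ready.go:102` line below is one iteration of that loop, dumping the pod's status while PodScheduled stays False under the not-ready taint. A client-go sketch of the same check (requires the k8s.io/client-go module; the kubeconfig path is a placeholder, and this is a stand-in for, not a copy of, minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True —
// the signal the pod_ready.go lines below are polling for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-6d4b75cb6d-mbfz4", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // the log polls on a ~2s cadence
	}
	fmt.Println("timed out waiting for pod to be Ready")
}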
	I0701 23:10:58.335246  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:10:59.066949  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:01.565821  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:00.835256  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:03.334510  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:03.566162  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:05.567345  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:05.835173  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:08.334350  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:08.067094  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:10.565414  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:12.566111  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:10.334372  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:12.335344  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:14.566344  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:17.065785  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:14.834372  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:16.834442  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:19.066060  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:21.066818  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:18.835141  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:21.335290  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:23.066855  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:25.566364  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:27.566761  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:23.835104  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:25.835407  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:28.334721  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:30.066364  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:32.066949  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:30.335044  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:32.834615  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:34.566677  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:37.066514  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:34.834740  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:36.835246  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:39.566283  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:42.066440  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:39.334127  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:41.334784  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:43.335046  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:44.066677  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:46.566155  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:45.335120  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:47.835290  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:49.067006  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:51.566261  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:50.334959  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:52.834501  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:53.566563  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:56.066781  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:55.335057  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:57.335198  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:58.566872  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:01.066349  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:11:59.835098  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:02.334947  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:03.066917  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:05.567287  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:04.335004  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:06.834679  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:08.067032  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:10.565807  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:12.568198  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:08.834990  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:11.334922  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:15.066443  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:17.066775  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:13.834714  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:16.335295  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:19.066984  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:21.566968  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:18.834428  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:20.834730  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:22.834980  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:24.065886  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:26.066500  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:24.835325  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:26.835428  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:28.566632  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:31.066025  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:29.335427  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:31.834460  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:33.066834  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:35.567919  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:33.835479  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:36.335438  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:38.066289  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:40.066776  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:42.067079  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:38.834828  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:41.335293  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:44.566023  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:47.066323  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:43.834535  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:46.334601  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:48.335004  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:49.567644  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:52.066841  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:50.335143  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:52.834795  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:54.566014  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:57.066882  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:55.334932  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:57.335218  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:12:59.566811  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:13:02.066846  245311 pod_ready.go:102] pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace has status "Ready":"False"
	I0701 23:12:59.834795  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:02.335059  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:03.569236  245311 pod_ready.go:81] duration metric: took 4m0.013189395s waiting for pod "coredns-5644d7b6d9-k97bn" in "kube-system" namespace to be "Ready" ...
	E0701 23:13:03.569260  245311 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0701 23:13:03.569268  245311 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-zqzmg" in "kube-system" namespace to be "Ready" ...
	I0701 23:13:03.570751  245311 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-zqzmg" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-zqzmg" not found
	I0701 23:13:03.570776  245311 pod_ready.go:81] duration metric: took 1.502466ms waiting for pod "coredns-5644d7b6d9-zqzmg" in "kube-system" namespace to be "Ready" ...
	E0701 23:13:03.570784  245311 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-zqzmg" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-zqzmg" not found
	I0701 23:13:03.570798  245311 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g4dnv" in "kube-system" namespace to be "Ready" ...
	I0701 23:13:03.574297  245311 pod_ready.go:92] pod "kube-proxy-g4dnv" in "kube-system" namespace has status "Ready":"True"
	I0701 23:13:03.574311  245311 pod_ready.go:81] duration metric: took 3.503795ms waiting for pod "kube-proxy-g4dnv" in "kube-system" namespace to be "Ready" ...
	I0701 23:13:03.574316  245311 pod_ready.go:38] duration metric: took 4m0.022968207s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
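
The interleaved entries above come from two concurrent test processes (pids 245311 and 269883), each polling a CoreDNS pod roughly every two seconds until its Ready condition turns true or the wait budget runs out; process 245311 has just exhausted its 4m extra-wait budget. A minimal client-go sketch of such a readiness poll, for illustration only (this is not minikube's actual pod_ready code; the kubeconfig path and pod name below are placeholders):

	// Readiness poll sketch: fetch the pod, inspect its PodReady condition,
	// retry on a fixed cadence until a deadline. Illustrative assumptions:
	// kubeconfig at the default location, pod name taken from the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitPodReady polls until the pod is Ready or the timeout elapses,
	// mirroring the "waiting up to 6m0s ... to be Ready" lines in the log.
	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil && podReady(pod) {
				return nil // pod has status "Ready":"True"
			}
			time.Sleep(2 * time.Second) // ~2s cadence, as seen in the timestamps above
		}
		return fmt.Errorf("timed out waiting for the condition")
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(config)
		fmt.Println(waitPodReady(cs, "kube-system", "coredns-5644d7b6d9-k97bn", 6*time.Minute))
	}
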
	I0701 23:13:03.574339  245311 api_server.go:51] waiting for apiserver process to appear ...
	I0701 23:13:03.574362  245311 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0701 23:13:03.574406  245311 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 23:13:03.598098  245311 cri.go:87] found id: "4b47f9926929819e69453420728cf3addca1506481e47f4c869fe0b9ecb1b05a"
	I0701 23:13:03.598124  245311 cri.go:87] found id: ""
	I0701 23:13:03.598132  245311 logs.go:274] 1 containers: [4b47f9926929819e69453420728cf3addca1506481e47f4c869fe0b9ecb1b05a]
	I0701 23:13:03.598183  245311 ssh_runner.go:195] Run: which crictl
	I0701 23:13:03.601336  245311 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0701 23:13:03.601389  245311 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 23:13:03.625070  245311 cri.go:87] found id: "a55adf585063763c3db98dacb0da46e76d68ce38f0c45609857c6b9ffb1a06d0"
	I0701 23:13:03.625097  245311 cri.go:87] found id: ""
	I0701 23:13:03.625102  245311 logs.go:274] 1 containers: [a55adf585063763c3db98dacb0da46e76d68ce38f0c45609857c6b9ffb1a06d0]
	I0701 23:13:03.625138  245311 ssh_runner.go:195] Run: which crictl
	I0701 23:13:03.627912  245311 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0701 23:13:03.627965  245311 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 23:13:03.651643  245311 cri.go:87] found id: "727be1a12a814c58548c78d384ece92824c96d9280a507808fd45ea07b1b46aa"
	I0701 23:13:03.651675  245311 cri.go:87] found id: ""
	I0701 23:13:03.651684  245311 logs.go:274] 1 containers: [727be1a12a814c58548c78d384ece92824c96d9280a507808fd45ea07b1b46aa]
	I0701 23:13:03.651732  245311 ssh_runner.go:195] Run: which crictl
	I0701 23:13:03.654620  245311 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0701 23:13:03.654673  245311 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 23:13:03.676764  245311 cri.go:87] found id: "1f2d3fd9a60d0e2579160255fd9b2b375185ae8aea24eed201bcbb08463b65bb"
	I0701 23:13:03.676791  245311 cri.go:87] found id: ""
	I0701 23:13:03.676798  245311 logs.go:274] 1 containers: [1f2d3fd9a60d0e2579160255fd9b2b375185ae8aea24eed201bcbb08463b65bb]
	I0701 23:13:03.676845  245311 ssh_runner.go:195] Run: which crictl
	I0701 23:13:03.679515  245311 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0701 23:13:03.679568  245311 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 23:13:03.702910  245311 cri.go:87] found id: "e4a74bac196c5e31b3f8c9c449789f1aa8ffc2993ca019c6d2b5a82e315c5223"
	I0701 23:13:03.702934  245311 cri.go:87] found id: ""
	I0701 23:13:03.702942  245311 logs.go:274] 1 containers: [e4a74bac196c5e31b3f8c9c449789f1aa8ffc2993ca019c6d2b5a82e315c5223]
	I0701 23:13:03.702986  245311 ssh_runner.go:195] Run: which crictl
	I0701 23:13:03.705769  245311 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 23:13:03.705823  245311 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 23:13:03.728693  245311 cri.go:87] found id: "55d927862045cb2a519e3a468367905baae75074cdda05ae64fb65694f98cc4b"
	I0701 23:13:03.728719  245311 cri.go:87] found id: "3cfac1159d4773d5f176f91fa74733ee9b0045ab10842bd2a217eddf9421471a"
	I0701 23:13:03.728729  245311 cri.go:87] found id: ""
	I0701 23:13:03.728736  245311 logs.go:274] 2 containers: [55d927862045cb2a519e3a468367905baae75074cdda05ae64fb65694f98cc4b 3cfac1159d4773d5f176f91fa74733ee9b0045ab10842bd2a217eddf9421471a]
	I0701 23:13:03.728778  245311 ssh_runner.go:195] Run: which crictl
	I0701 23:13:03.731678  245311 ssh_runner.go:195] Run: which crictl
	I0701 23:13:03.734368  245311 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0701 23:13:03.734438  245311 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 23:13:03.756602  245311 cri.go:87] found id: "08c1ea895fea68162f932135dc53d50d05dd9e25682cae5da142785d5e5da6c0"
	I0701 23:13:03.756623  245311 cri.go:87] found id: "ad010e052537e0cab1713f6b0445a1b3134cb1188c3ff2b5eb9f68e9818c8143"
	I0701 23:13:03.756634  245311 cri.go:87] found id: ""
	I0701 23:13:03.756641  245311 logs.go:274] 2 containers: [08c1ea895fea68162f932135dc53d50d05dd9e25682cae5da142785d5e5da6c0 ad010e052537e0cab1713f6b0445a1b3134cb1188c3ff2b5eb9f68e9818c8143]
	I0701 23:13:03.756675  245311 ssh_runner.go:195] Run: which crictl
	I0701 23:13:03.759300  245311 ssh_runner.go:195] Run: which crictl
	I0701 23:13:03.761790  245311 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 23:13:03.761844  245311 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 23:13:03.783728  245311 cri.go:87] found id: "d9f13d77694b02b2ceed82bd2f28b0172dfc05b9f0d7b457accd742a791866bc"
	I0701 23:13:03.783756  245311 cri.go:87] found id: ""
	I0701 23:13:03.783763  245311 logs.go:274] 1 containers: [d9f13d77694b02b2ceed82bd2f28b0172dfc05b9f0d7b457accd742a791866bc]
	I0701 23:13:03.783792  245311 ssh_runner.go:195] Run: which crictl
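
Having given up on the readiness wait, the test falls back to collecting diagnostics. The block above enumerates, component by component, the container IDs that will be dumped, by running "sudo crictl ps -a --quiet --name=<component>" over SSH; the command prints one container ID per line (possibly none, possibly several, as with kubernetes-dashboard and storage-provisioner above). A hedged Go sketch of that discovery step, under the assumptions that crictl is on PATH and passwordless sudo is available:

	// Container-ID discovery sketch: run crictl with --quiet so only IDs are
	// printed, then split the output into one ID per line. Illustrative only.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns all container IDs (any state) whose name
	// matches the given component, as logged by cri.go above.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		// The same component list the log walks through, in order.
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kubernetes-dashboard", "storage-provisioner", "kube-controller-manager"} {
			ids, err := listContainerIDs(c)
			fmt.Println(c, ids, err)
		}
	}
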
	I0701 23:13:03.786429  245311 logs.go:123] Gathering logs for kube-apiserver [4b47f9926929819e69453420728cf3addca1506481e47f4c869fe0b9ecb1b05a] ...
	I0701 23:13:03.786449  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b47f9926929819e69453420728cf3addca1506481e47f4c869fe0b9ecb1b05a"
	I0701 23:13:03.829813  245311 logs.go:123] Gathering logs for coredns [727be1a12a814c58548c78d384ece92824c96d9280a507808fd45ea07b1b46aa] ...
	I0701 23:13:03.829834  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 727be1a12a814c58548c78d384ece92824c96d9280a507808fd45ea07b1b46aa"
	I0701 23:13:03.868691  245311 logs.go:123] Gathering logs for kubernetes-dashboard [55d927862045cb2a519e3a468367905baae75074cdda05ae64fb65694f98cc4b] ...
	I0701 23:13:03.868726  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55d927862045cb2a519e3a468367905baae75074cdda05ae64fb65694f98cc4b"
	I0701 23:13:03.892505  245311 logs.go:123] Gathering logs for container status ...
	I0701 23:13:03.892532  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 23:13:03.918299  245311 logs.go:123] Gathering logs for kube-proxy [e4a74bac196c5e31b3f8c9c449789f1aa8ffc2993ca019c6d2b5a82e315c5223] ...
	I0701 23:13:03.918325  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4a74bac196c5e31b3f8c9c449789f1aa8ffc2993ca019c6d2b5a82e315c5223"
	I0701 23:13:03.940698  245311 logs.go:123] Gathering logs for storage-provisioner [08c1ea895fea68162f932135dc53d50d05dd9e25682cae5da142785d5e5da6c0] ...
	I0701 23:13:03.940733  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08c1ea895fea68162f932135dc53d50d05dd9e25682cae5da142785d5e5da6c0"
	I0701 23:13:03.964217  245311 logs.go:123] Gathering logs for etcd [a55adf585063763c3db98dacb0da46e76d68ce38f0c45609857c6b9ffb1a06d0] ...
	I0701 23:13:03.964246  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a55adf585063763c3db98dacb0da46e76d68ce38f0c45609857c6b9ffb1a06d0"
	I0701 23:13:03.998956  245311 logs.go:123] Gathering logs for kube-scheduler [1f2d3fd9a60d0e2579160255fd9b2b375185ae8aea24eed201bcbb08463b65bb] ...
	I0701 23:13:03.998982  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f2d3fd9a60d0e2579160255fd9b2b375185ae8aea24eed201bcbb08463b65bb"
	I0701 23:13:04.028494  245311 logs.go:123] Gathering logs for kube-controller-manager [d9f13d77694b02b2ceed82bd2f28b0172dfc05b9f0d7b457accd742a791866bc] ...
	I0701 23:13:04.028523  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9f13d77694b02b2ceed82bd2f28b0172dfc05b9f0d7b457accd742a791866bc"
	I0701 23:13:04.080296  245311 logs.go:123] Gathering logs for containerd ...
	I0701 23:13:04.080327  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0701 23:13:04.142861  245311 logs.go:123] Gathering logs for storage-provisioner [ad010e052537e0cab1713f6b0445a1b3134cb1188c3ff2b5eb9f68e9818c8143] ...
	I0701 23:13:04.142890  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad010e052537e0cab1713f6b0445a1b3134cb1188c3ff2b5eb9f68e9818c8143"
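
Each discovered ID is then tailed with "crictl logs --tail 400 <id>", while systemd-managed components (containerd above, kubelet next) are tailed with "journalctl -u <unit> -n 400" instead. A small sketch of both gather commands, mirroring exactly what the log records (illustrative only; error handling reduced to pass-through):

	// Log-gathering sketch: last 400 lines of a container via crictl, or of
	// a systemd unit via journalctl. CombinedOutput is used because crictl
	// writes some streams to stderr.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func containerLogs(id string) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		return string(out), err
	}

	func unitLogs(unit string) (string, error) {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
		return string(out), err
	}

	func main() {
		s, err := unitLogs("containerd")
		fmt.Println(len(s), err)
	}

The kubelet journal gathered below is additionally scanned for known problem patterns (logs.go:138), which is what produces the long run of "Found kubelet problem" warnings that follows.
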
	I0701 23:13:04.166954  245311 logs.go:123] Gathering logs for kubelet ...
	I0701 23:13:04.166980  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 23:13:04.192126  245311 logs.go:138] Found kubelet problem: Jul 01 23:03:51 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:03:51.863315     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0701 23:13:04.192306  245311 logs.go:138] Found kubelet problem: Jul 01 23:03:52 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:03:52.464784     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.193491  245311 logs.go:138] Found kubelet problem: Jul 01 23:04:04 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:04:04.311457     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0701 23:13:04.193649  245311 logs.go:138] Found kubelet problem: Jul 01 23:04:05 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:04:05.489840     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.194081  245311 logs.go:138] Found kubelet problem: Jul 01 23:04:16 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:04:16.218877     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.195291  245311 logs.go:138] Found kubelet problem: Jul 01 23:04:28 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:04:28.231400     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0701 23:13:04.195447  245311 logs.go:138] Found kubelet problem: Jul 01 23:04:39 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:04:39.219001     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.195602  245311 logs.go:138] Found kubelet problem: Jul 01 23:04:50 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:04:50.218928     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.195759  245311 logs.go:138] Found kubelet problem: Jul 01 23:04:51 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:04:51.585326     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.195915  245311 logs.go:138] Found kubelet problem: Jul 01 23:05:02 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:05:02.218268     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.196081  245311 logs.go:138] Found kubelet problem: Jul 01 23:05:04 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:05:04.219008     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.196984  245311 logs.go:138] Found kubelet problem: Jul 01 23:05:18 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:05:18.231346     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0701 23:13:04.197147  245311 logs.go:138] Found kubelet problem: Jul 01 23:05:30 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:05:30.222082     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.197299  245311 logs.go:138] Found kubelet problem: Jul 01 23:05:43 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:05:43.218929     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.197454  245311 logs.go:138] Found kubelet problem: Jul 01 23:05:45 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:05:45.696518     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.197611  245311 logs.go:138] Found kubelet problem: Jul 01 23:05:56 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:05:56.219007     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.197762  245311 logs.go:138] Found kubelet problem: Jul 01 23:05:57 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:05:57.218434     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.197912  245311 logs.go:138] Found kubelet problem: Jul 01 23:06:09 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:06:09.219015     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.198069  245311 logs.go:138] Found kubelet problem: Jul 01 23:06:11 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:06:11.218959     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.198215  245311 logs.go:138] Found kubelet problem: Jul 01 23:06:15 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:06:15.759238     770 pod_workers.go:191] Error syncing pod 0f6b7680-dbef-4a34-8e81-5e9a14db6993 ("kindnet-gmgzk_kube-system(0f6b7680-dbef-4a34-8e81-5e9a14db6993)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 10s restarting failed container=kindnet-cni pod=kindnet-gmgzk_kube-system(0f6b7680-dbef-4a34-8e81-5e9a14db6993)"
	W0701 23:13:04.198377  245311 logs.go:138] Found kubelet problem: Jul 01 23:06:23 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:06:23.218264     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.198565  245311 logs.go:138] Found kubelet problem: Jul 01 23:06:25 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:06:25.219000     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.198807  245311 logs.go:138] Found kubelet problem: Jul 01 23:06:38 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:06:38.219041     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.199714  245311 logs.go:138] Found kubelet problem: Jul 01 23:06:51 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:06:51.258634     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0701 23:13:04.199868  245311 logs.go:138] Found kubelet problem: Jul 01 23:07:06 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:07:06.219190     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.200019  245311 logs.go:138] Found kubelet problem: Jul 01 23:07:06 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:07:06.873041     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.200167  245311 logs.go:138] Found kubelet problem: Jul 01 23:07:19 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:07:19.218858     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.200323  245311 logs.go:138] Found kubelet problem: Jul 01 23:07:19 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:07:19.219602     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.200470  245311 logs.go:138] Found kubelet problem: Jul 01 23:07:30 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:07:30.218243     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.200621  245311 logs.go:138] Found kubelet problem: Jul 01 23:07:31 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:07:31.219172     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.200776  245311 logs.go:138] Found kubelet problem: Jul 01 23:07:42 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:07:42.218206     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.200928  245311 logs.go:138] Found kubelet problem: Jul 01 23:07:43 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:07:43.218901     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.201122  245311 logs.go:138] Found kubelet problem: Jul 01 23:07:55 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:07:55.218250     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.201303  245311 logs.go:138] Found kubelet problem: Jul 01 23:07:56 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:07:56.218823     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.201462  245311 logs.go:138] Found kubelet problem: Jul 01 23:08:09 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:08:09.218748     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.201615  245311 logs.go:138] Found kubelet problem: Jul 01 23:08:10 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:08:10.218798     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.201771  245311 logs.go:138] Found kubelet problem: Jul 01 23:08:21 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:08:21.219015     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.201955  245311 logs.go:138] Found kubelet problem: Jul 01 23:08:22 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:08:22.218294     770 pod_workers.go:191] Error syncing pod 87527843-3c92-4175-b5ba-a2e3f4e67c03 ("storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87527843-3c92-4175-b5ba-a2e3f4e67c03)"
	W0701 23:13:04.202114  245311 logs.go:138] Found kubelet problem: Jul 01 23:08:34 old-k8s-version-20220701225700-10066 kubelet[770]: E0701 23:08:34.218766     770 pod_workers.go:191] Error syncing pod d4265228-2e21-45eb-b014-cfe41ded886d ("metrics-server-7958775c-8srrs_kube-system(d4265228-2e21-45eb-b014-cfe41ded886d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.239697  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:03 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:03.247569    4764 pod_workers.go:191] Error syncing pod 4a3b2350-c424-4b64-bc26-464c6485c295 ("coredns-5644d7b6d9-zqzmg_kube-system(4a3b2350-c424-4b64-bc26-464c6485c295)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "cannot find volume \"config-volume\" to mount into container \"coredns\""
	W0701 23:13:04.242261  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:14 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:14.473432    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0701 23:13:04.242417  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:15 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:15.457415    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.242602  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:17 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:17.463867    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.242789  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:18 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:18.466920    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.243021  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:19 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:19.468338    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.243989  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:29 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:29.269820    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0701 23:13:04.244166  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:34 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:34.496554    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.244331  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:38 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:38.196082    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.244502  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:41 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:41.256927    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.244679  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:49 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:49.256485    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.245579  245311 logs.go:138] Found kubelet problem: Jul 01 23:09:55 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:09:55.337448    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0701 23:13:04.245750  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:01 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:01.550176    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.245912  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:06 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:06.256928    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.246059  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:06 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:06.562670    4764 pod_workers.go:191] Error syncing pod a3c51354-ae42-4173-a10c-a7007d23cf91 ("storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"
	W0701 23:13:04.246227  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:08 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:08.195936    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.246388  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:12 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:12.577430    4764 pod_workers.go:191] Error syncing pod bd5c7273-9787-44fc-8bfe-b5ec02aa2335 ("kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"
	W0701 23:13:04.246578  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:18 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:18.253487    4764 pod_workers.go:191] Error syncing pod bd5c7273-9787-44fc-8bfe-b5ec02aa2335 ("kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"
	W0701 23:13:04.246737  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:20 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:20.257021    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.246910  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:22 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:22.256362    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.247063  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:33 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:33.256950    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.247225  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:35 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:35.256288    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.248138  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:46 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:46.357751    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0701 23:13:04.248313  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:48 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:48.645508    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.248486  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:50 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:50.653652    4764 pod_workers.go:191] Error syncing pod a3c51354-ae42-4173-a10c-a7007d23cf91 ("storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"
	W0701 23:13:04.248658  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:58 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:58.196029    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.248821  245311 logs.go:138] Found kubelet problem: Jul 01 23:10:59 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:10:59.256718    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.248991  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:00 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:00.675487    4764 pod_workers.go:191] Error syncing pod bd5c7273-9787-44fc-8bfe-b5ec02aa2335 ("kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"
	W0701 23:13:04.249141  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:06 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:06.256399    4764 pod_workers.go:191] Error syncing pod a3c51354-ae42-4173-a10c-a7007d23cf91 ("storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"
	W0701 23:13:04.249303  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:08 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:08.253683    4764 pod_workers.go:191] Error syncing pod bd5c7273-9787-44fc-8bfe-b5ec02aa2335 ("kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"
	W0701 23:13:04.249467  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:09 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:09.256386    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.249623  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:10 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:10.257019    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.249795  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:20 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:20.256357    4764 pod_workers.go:191] Error syncing pod bd5c7273-9787-44fc-8bfe-b5ec02aa2335 ("kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"
	W0701 23:13:04.249957  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:20 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:20.256845    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.250117  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:21 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:21.256771    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.250280  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:31 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:31.256281    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.250433  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:35 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:35.257030    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.250621  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:44 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:44.256624    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.250774  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:47 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:47.763173    4764 pod_workers.go:191] Error syncing pod a3c51354-ae42-4173-a10c-a7007d23cf91 ("storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"
	W0701 23:13:04.250939  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:49 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:49.256705    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.251103  245311 logs.go:138] Found kubelet problem: Jul 01 23:11:58 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:11:58.257081    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.251258  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:00 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:00.256251    4764 pod_workers.go:191] Error syncing pod a3c51354-ae42-4173-a10c-a7007d23cf91 ("storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"
	W0701 23:13:04.251418  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:02 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:02.256992    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.251580  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:05 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:05.796542    4764 pod_workers.go:191] Error syncing pod bd5c7273-9787-44fc-8bfe-b5ec02aa2335 ("kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"
	W0701 23:13:04.251751  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:08 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:08.253712    4764 pod_workers.go:191] Error syncing pod bd5c7273-9787-44fc-8bfe-b5ec02aa2335 ("kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"
	W0701 23:13:04.251928  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:11 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:11.810210    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.252099  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:12 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:12.256324    4764 pod_workers.go:191] Error syncing pod a3c51354-ae42-4173-a10c-a7007d23cf91 ("storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"
	W0701 23:13:04.252267  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:18 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:18.195937    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.253211  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:18 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:18.614311    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0701 23:13:04.253381  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:23 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:23.256290    4764 pod_workers.go:191] Error syncing pod bd5c7273-9787-44fc-8bfe-b5ec02aa2335 ("kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"
	W0701 23:13:04.253531  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:26 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:26.256319    4764 pod_workers.go:191] Error syncing pod a3c51354-ae42-4173-a10c-a7007d23cf91 ("storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3c51354-ae42-4173-a10c-a7007d23cf91)"
	W0701 23:13:04.253701  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:32 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:32.256483    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.253855  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:32 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:32.257144    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.254028  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:37 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:37.256349    4764 pod_workers.go:191] Error syncing pod bd5c7273-9787-44fc-8bfe-b5ec02aa2335 ("kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"
	W0701 23:13:04.254182  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:45 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:45.256839    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.254353  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:47 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:47.256227    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.254513  245311 logs.go:138] Found kubelet problem: Jul 01 23:12:59 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:59.256850    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.254694  245311 logs.go:138] Found kubelet problem: Jul 01 23:13:00 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:13:00.256349    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
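The run of "Found kubelet problem" warnings above is minikube's logs.go scanning the kubelet journal for lines matching known failure patterns. A minimal Go sketch of that kind of scan, assuming input piped from "journalctl -u kubelet"; the single regex here is illustrative, not minikube's actual pattern list:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Flag kubelet journal lines that look like pod sync failures, the same
    // class of line surfaced above as "Found kubelet problem".
    var problemRe = regexp.MustCompile(`pod_workers\.go:\d+\] Error syncing pod`)

    func main() {
        sc := bufio.NewScanner(os.Stdin) // e.g. journalctl -u kubelet | thisprogram
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // kubelet lines can be long
        for sc.Scan() {
            if problemRe.MatchString(sc.Text()) {
                fmt.Println("Found kubelet problem:", sc.Text())
            }
        }
    }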
	I0701 23:13:04.254706  245311 logs.go:123] Gathering logs for dmesg ...
	I0701 23:13:04.254717  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 23:13:04.271353  245311 logs.go:123] Gathering logs for describe nodes ...
	I0701 23:13:04.271380  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 23:13:04.359204  245311 logs.go:123] Gathering logs for kubernetes-dashboard [3cfac1159d4773d5f176f91fa74733ee9b0045ab10842bd2a217eddf9421471a] ...
	I0701 23:13:04.359237  245311 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cfac1159d4773d5f176f91fa74733ee9b0045ab10842bd2a217eddf9421471a"
	I0701 23:13:04.383261  245311 out.go:309] Setting ErrFile to fd 2...
	I0701 23:13:04.383286  245311 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0701 23:13:04.383393  245311 out.go:239] X Problems detected in kubelet:
	W0701 23:13:04.383406  245311 out.go:239]   Jul 01 23:12:37 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:37.256349    4764 pod_workers.go:191] Error syncing pod bd5c7273-9787-44fc-8bfe-b5ec02aa2335 ("kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-958c5c65f-kvb25_kubernetes-dashboard(bd5c7273-9787-44fc-8bfe-b5ec02aa2335)"
	W0701 23:13:04.383429  245311 out.go:239]   Jul 01 23:12:45 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:45.256839    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.383440  245311 out.go:239]   Jul 01 23:12:47 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:47.256227    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	W0701 23:13:04.383448  245311 out.go:239]   Jul 01 23:12:59 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:12:59.256850    4764 pod_workers.go:191] Error syncing pod ddaee35b-31ec-4248-8d4f-032e18844ccd ("metrics-server-7958775c-49wzt_kube-system(ddaee35b-31ec-4248-8d4f-032e18844ccd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	W0701 23:13:04.383460  245311 out.go:239]   Jul 01 23:13:00 old-k8s-version-20220701225700-10066 kubelet[4764]: E0701 23:13:00.256349    4764 pod_workers.go:191] Error syncing pod 854db8ce-8906-4576-bbd6-e32235b0bf80 ("dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-lrcbs_kubernetes-dashboard(854db8ce-8906-4576-bbd6-e32235b0bf80)"
	I0701 23:13:04.383466  245311 out.go:309] Setting ErrFile to fd 2...
	I0701 23:13:04.383475  245311 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:13:04.834809  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:06.835212  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:09.334808  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:11.334961  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:13.335410  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
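The pod_ready.go:102 lines above are a poll loop: minikube repeatedly fetches coredns-6d4b75cb6d-mbfz4 and checks its Ready condition, which never turns True here because the node's not-ready taint leaves the pod Unschedulable. A minimal client-go sketch of such a poll; the kubeconfig path is an assumption for illustration:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path, for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "coredns-6d4b75cb6d-mbfz4", metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    fmt.Println("pod is Ready")
                    return
                }
            }
            // In this report the pod stays Pending/Unschedulable, so the loop never exits.
            fmt.Printf("pod %q not Ready yet (phase %s)\n", pod.Name, pod.Status.Phase)
            time.Sleep(2 * time.Second)
        }
    }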
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	e85724e2f1379       6fb66cd78abfe       3 minutes ago       Exited              kindnet-cni               3                   4386b73a1f791
	b63fba32c68cc       a634548d10b03       12 minutes ago      Running             kube-proxy                0                   8996b4de19f2f
	50e0bf3dbb8c1       aebe758cef4cd       12 minutes ago      Running             etcd                      0                   0e8080292fce1
	f41d2b7f1a0c9       34cdf99b1bb3b       12 minutes ago      Running             kube-controller-manager   0                   42cd5575a78ac
	a349e45d95bb6       d3377ffb7177c       12 minutes ago      Running             kube-apiserver            0                   b016974955465
	042166814f4c8       5d725196c1f47       12 minutes ago      Running             kube-scheduler            0                   a077b2a3977f0
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-07-01 23:00:41 UTC, end at Fri 2022-07-01 23:13:14 UTC. --
	Jul 01 23:06:29 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:06:29.878015509Z" level=warning msg="cleaning up after shim disconnected" id=cb48669a69f64e2f17c64425e02c1ae6dfa44b7dc264a7992b2952f09646f452 namespace=k8s.io
	Jul 01 23:06:29 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:06:29.878034759Z" level=info msg="cleaning up dead shim"
	Jul 01 23:06:29 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:06:29.887840768Z" level=warning msg="cleanup warnings time=\"2022-07-01T23:06:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2461 runtime=io.containerd.runc.v2\n"
	Jul 01 23:06:30 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:06:30.628255801Z" level=info msg="RemoveContainer for \"c11940cc2c8ec5e2f7fd5f8efbe605f5483acdebf6732ce60b631524b5e42b6a\""
	Jul 01 23:06:30 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:06:30.632295226Z" level=info msg="RemoveContainer for \"c11940cc2c8ec5e2f7fd5f8efbe605f5483acdebf6732ce60b631524b5e42b6a\" returns successfully"
	Jul 01 23:06:42 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:06:42.955995884Z" level=info msg="CreateContainer within sandbox \"4386b73a1f791d0ffc3776da483f1f4457dfbaa0c69557993b95cb6067747d32\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jul 01 23:06:42 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:06:42.968659548Z" level=info msg="CreateContainer within sandbox \"4386b73a1f791d0ffc3776da483f1f4457dfbaa0c69557993b95cb6067747d32\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"054a85a49188bd722f8f2b3188f6bbcb67031b2c9d8d640d82248945b99fb88b\""
	Jul 01 23:06:42 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:06:42.969110368Z" level=info msg="StartContainer for \"054a85a49188bd722f8f2b3188f6bbcb67031b2c9d8d640d82248945b99fb88b\""
	Jul 01 23:06:43 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:06:43.035389439Z" level=info msg="StartContainer for \"054a85a49188bd722f8f2b3188f6bbcb67031b2c9d8d640d82248945b99fb88b\" returns successfully"
	Jul 01 23:09:23 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:09:23.467513671Z" level=info msg="shim disconnected" id=054a85a49188bd722f8f2b3188f6bbcb67031b2c9d8d640d82248945b99fb88b
	Jul 01 23:09:23 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:09:23.467581950Z" level=warning msg="cleaning up after shim disconnected" id=054a85a49188bd722f8f2b3188f6bbcb67031b2c9d8d640d82248945b99fb88b namespace=k8s.io
	Jul 01 23:09:23 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:09:23.467597827Z" level=info msg="cleaning up dead shim"
	Jul 01 23:09:23 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:09:23.477093004Z" level=warning msg="cleanup warnings time=\"2022-07-01T23:09:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2566 runtime=io.containerd.runc.v2\n"
	Jul 01 23:09:23 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:09:23.948165580Z" level=info msg="RemoveContainer for \"cb48669a69f64e2f17c64425e02c1ae6dfa44b7dc264a7992b2952f09646f452\""
	Jul 01 23:09:23 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:09:23.952191227Z" level=info msg="RemoveContainer for \"cb48669a69f64e2f17c64425e02c1ae6dfa44b7dc264a7992b2952f09646f452\" returns successfully"
	Jul 01 23:09:46 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:09:46.956607181Z" level=info msg="CreateContainer within sandbox \"4386b73a1f791d0ffc3776da483f1f4457dfbaa0c69557993b95cb6067747d32\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jul 01 23:09:46 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:09:46.969040688Z" level=info msg="CreateContainer within sandbox \"4386b73a1f791d0ffc3776da483f1f4457dfbaa0c69557993b95cb6067747d32\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e\""
	Jul 01 23:09:46 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:09:46.969484685Z" level=info msg="StartContainer for \"e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e\""
	Jul 01 23:09:47 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:09:47.221441787Z" level=info msg="StartContainer for \"e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e\" returns successfully"
	Jul 01 23:12:27 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:12:27.639406039Z" level=info msg="shim disconnected" id=e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e
	Jul 01 23:12:27 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:12:27.639466172Z" level=warning msg="cleaning up after shim disconnected" id=e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e namespace=k8s.io
	Jul 01 23:12:27 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:12:27.639479354Z" level=info msg="cleaning up dead shim"
	Jul 01 23:12:27 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:12:27.648670720Z" level=warning msg="cleanup warnings time=\"2022-07-01T23:12:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2674 runtime=io.containerd.runc.v2\n"
	Jul 01 23:12:28 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:12:28.285076104Z" level=info msg="RemoveContainer for \"054a85a49188bd722f8f2b3188f6bbcb67031b2c9d8d640d82248945b99fb88b\""
	Jul 01 23:12:28 default-k8s-different-port-20220701230032-10066 containerd[516]: time="2022-07-01T23:12:28.291561861Z" level=info msg="RemoveContainer for \"054a85a49188bd722f8f2b3188f6bbcb67031b2c9d8d640d82248945b99fb88b\" returns successfully"
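The cycle above (CreateContainer with a climbing Attempt counter, StartContainer, "shim disconnected", then RemoveContainer of the previous attempt) is kindnet-cni crash-looping under containerd. A minimal sketch, assuming direct access to the same socket named in the cri-socket annotation later in this report, that uses the containerd Go client to list containers in the k8s.io namespace and the state of their tasks:

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // Kubernetes-managed containers live in the k8s.io namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        containers, err := client.Containers(ctx)
        if err != nil {
            panic(err)
        }
        for _, c := range containers {
            task, err := c.Task(ctx, nil)
            if err != nil {
                // A crash-looping container shows up here between attempts.
                fmt.Printf("%s: no running task (exited or being cleaned up)\n", c.ID())
                continue
            }
            status, err := task.Status(ctx)
            if err != nil {
                panic(err)
            }
            fmt.Printf("%s: %s\n", c.ID(), status.Status)
        }
    }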
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220701230032-10066
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220701230032-10066
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04
	                    minikube.k8s.io/name=default-k8s-different-port-20220701230032-10066
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_01T23_00_55_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 01 Jul 2022 23:00:52 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220701230032-10066
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 01 Jul 2022 23:13:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 01 Jul 2022 23:11:17 +0000   Fri, 01 Jul 2022 23:00:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 01 Jul 2022 23:11:17 +0000   Fri, 01 Jul 2022 23:00:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 01 Jul 2022 23:11:17 +0000   Fri, 01 Jul 2022 23:00:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 01 Jul 2022 23:11:17 +0000   Fri, 01 Jul 2022 23:00:49 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-different-port-20220701230032-10066
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873484Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873484Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                674fca36-2ebb-426c-b65b-bd78bdb510f5
	  Boot ID:                    a4927dcd-d031-4927-a8c8-2ea0f9a10287
	  Kernel Version:             5.15.0-1012-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.6
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220701230032-10066                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-49h72                                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-default-k8s-different-port-20220701230032-10066             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220701230032-10066    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-qg5j2                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-different-port-20220701230032-10066             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node default-k8s-different-port-20220701230032-10066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node default-k8s-different-port-20220701230032-10066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node default-k8s-different-port-20220701230032-10066 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m   node-controller  Node default-k8s-different-port-20220701230032-10066 event: Registered Node default-k8s-different-port-20220701230032-10066 in Controller
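The describe output above is the heart of this failure: the node is NotReady ("cni plugin not initialized"), carries both not-ready taints, and only the static control-plane pods plus the two DaemonSet pods are scheduled. A minimal client-go sketch of checking exactly that condition and the taints programmatically; the kubeconfig path is an assumption:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(),
            "default-k8s-different-port-20220701230032-10066", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                // In this report: Status=False, Reason=KubeletNotReady,
                // message "cni plugin not initialized".
                fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
            }
        }
        for _, t := range node.Spec.Taints {
            fmt.Printf("taint %s:%s\n", t.Key, t.Effect) // node.kubernetes.io/not-ready
        }
    }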
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +1.002277] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +2.015803] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000004] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +4.255546] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000007] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000001] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000005] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000011] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +8.195166] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000007] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000003] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000003] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	
	* 
	* ==> etcd [50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52] <==
	* {"level":"info","ts":"2022-07-01T23:00:48.724Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-01T23:00:49.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-different-port-20220701230032-10066 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:00:49.354Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:00:49.355Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-01T23:00:49.355Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2022-07-01T23:05:43.769Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"187.454985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"info","ts":"2022-07-01T23:05:43.769Z","caller":"traceutil/trace.go:171","msg":"trace[70078069] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:450; }","duration":"187.563797ms","start":"2022-07-01T23:05:43.582Z","end":"2022-07-01T23:05:43.769Z","steps":["trace[70078069] 'agreement among raft nodes before linearized reading'  (duration: 92.830553ms)","trace[70078069] 'range keys from in-memory index tree'  (duration: 94.58687ms)"],"step_count":2}
	{"level":"warn","ts":"2022-07-01T23:05:43.769Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"120.015467ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-07-01T23:05:43.769Z","caller":"traceutil/trace.go:171","msg":"trace[192464171] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:450; }","duration":"120.215201ms","start":"2022-07-01T23:05:43.649Z","end":"2022-07-01T23:05:43.769Z","steps":["trace[192464171] 'agreement among raft nodes before linearized reading'  (duration: 25.42883ms)","trace[192464171] 'range keys from in-memory index tree'  (duration: 94.565934ms)"],"step_count":2}
	{"level":"info","ts":"2022-07-01T23:10:49.661Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":451}
	{"level":"info","ts":"2022-07-01T23:10:49.661Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":451,"took":"481.342µs"}
	
	* 
	* ==> kernel <==
	*  23:13:14 up 55 min,  0 users,  load average: 0.50, 0.89, 1.55
	Linux default-k8s-different-port-20220701230032-10066 5.15.0-1012-gcp #17~20.04.1-Ubuntu SMP Thu Jun 23 16:10:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2] <==
	* I0701 23:00:52.122528       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0701 23:00:52.122739       1 cache.go:39] Caches are synced for autoregister controller
	I0701 23:00:52.122766       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0701 23:00:52.126063       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0701 23:00:52.126674       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0701 23:00:52.138004       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0701 23:00:52.142988       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0701 23:00:52.766811       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0701 23:00:53.027362       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0701 23:00:53.030604       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0701 23:00:53.030622       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0701 23:00:53.389140       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0701 23:00:53.433846       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0701 23:00:53.563524       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0701 23:00:53.568204       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0701 23:00:53.569200       1 controller.go:611] quota admission added evaluator for: endpoints
	I0701 23:00:53.572998       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0701 23:00:54.150574       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0701 23:00:54.803526       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0701 23:00:54.810011       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0701 23:00:54.817885       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0701 23:00:54.923302       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0701 23:01:07.657482       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0701 23:01:07.805142       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0701 23:01:08.440379       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c] <==
	* I0701 23:01:06.998693       1 node_lifecycle_controller.go:1165] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0701 23:01:06.999829       1 shared_informer.go:262] Caches are synced for service account
	I0701 23:01:07.000984       1 shared_informer.go:262] Caches are synced for PV protection
	I0701 23:01:07.002759       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0701 23:01:07.010484       1 shared_informer.go:262] Caches are synced for expand
	I0701 23:01:07.021740       1 shared_informer.go:262] Caches are synced for stateful set
	I0701 23:01:07.048510       1 shared_informer.go:262] Caches are synced for disruption
	I0701 23:01:07.048533       1 disruption.go:371] Sending events to api server.
	I0701 23:01:07.050704       1 shared_informer.go:262] Caches are synced for daemon sets
	I0701 23:01:07.154573       1 shared_informer.go:262] Caches are synced for attach detach
	I0701 23:01:07.172619       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0701 23:01:07.198444       1 shared_informer.go:262] Caches are synced for endpoint
	I0701 23:01:07.207329       1 shared_informer.go:262] Caches are synced for resource quota
	I0701 23:01:07.226645       1 shared_informer.go:262] Caches are synced for resource quota
	I0701 23:01:07.255294       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0701 23:01:07.625587       1 shared_informer.go:262] Caches are synced for garbage collector
	I0701 23:01:07.659581       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0701 23:01:07.683577       1 shared_informer.go:262] Caches are synced for garbage collector
	I0701 23:01:07.683598       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0701 23:01:07.810761       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-49h72"
	I0701 23:01:07.812355       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qg5j2"
	I0701 23:01:08.007720       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-j7d7h"
	I0701 23:01:08.013547       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-zmnqs"
	I0701 23:01:08.206059       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0701 23:01:08.211257       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-j7d7h"
	
	* 
	* ==> kube-proxy [b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7] <==
	* I0701 23:01:08.413673       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0701 23:01:08.413740       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0701 23:01:08.413778       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0701 23:01:08.436458       1 server_others.go:206] "Using iptables Proxier"
	I0701 23:01:08.436499       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0701 23:01:08.436509       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0701 23:01:08.436529       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0701 23:01:08.436562       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 23:01:08.436755       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 23:01:08.437083       1 server.go:661] "Version info" version="v1.24.2"
	I0701 23:01:08.437106       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 23:01:08.437672       1 config.go:226] "Starting endpoint slice config controller"
	I0701 23:01:08.437701       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0701 23:01:08.438341       1 config.go:317] "Starting service config controller"
	I0701 23:01:08.438370       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0701 23:01:08.438585       1 config.go:444] "Starting node config controller"
	I0701 23:01:08.438745       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0701 23:01:08.538588       1 shared_informer.go:262] Caches are synced for service config
	I0701 23:01:08.538607       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0701 23:01:08.539109       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c] <==
	* W0701 23:00:52.118488       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 23:00:52.119392       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0701 23:00:52.119370       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0701 23:00:52.119451       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0701 23:00:52.119361       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 23:00:52.119469       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0701 23:00:52.118588       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0701 23:00:52.119594       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0701 23:00:52.965924       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0701 23:00:52.965973       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0701 23:00:52.981058       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0701 23:00:52.981091       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0701 23:00:53.008284       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0701 23:00:53.008567       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0701 23:00:53.024485       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0701 23:00:53.024517       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0701 23:00:53.118081       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 23:00:53.118128       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0701 23:00:53.118261       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0701 23:00:53.118301       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0701 23:00:53.171211       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 23:00:53.171246       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0701 23:00:53.218071       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0701 23:00:53.218112       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0701 23:00:55.254285       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
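
These "forbidden" list/watch failures are ordinary startup noise: the scheduler comes up before kubeadm finishes wiring the system:kube-scheduler RBAC bindings, and they stop once the cache sync at 23:00:55 lands. Had they persisted, an RBAC probe would be the next step; a sketch, reusing the context name from this run:

    kubectl --context default-k8s-different-port-20220701230032-10066 auth can-i list nodes --as=system:kube-scheduler
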
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-07-01 23:00:41 UTC, end at Fri 2022-07-01 23:13:14 UTC. --
	Jul 01 23:11:55 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:11:55.332151    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:00 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:00.333609    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:05 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:05.335006    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:10 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:10.336509    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:15 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:15.337730    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:20 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:20.339305    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:25 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:25.340888    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:28 default-k8s-different-port-20220701230032-10066 kubelet[1325]: I0701 23:12:28.283925    1325 scope.go:110] "RemoveContainer" containerID="054a85a49188bd722f8f2b3188f6bbcb67031b2c9d8d640d82248945b99fb88b"
	Jul 01 23:12:28 default-k8s-different-port-20220701230032-10066 kubelet[1325]: I0701 23:12:28.284085    1325 scope.go:110] "RemoveContainer" containerID="e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e"
	Jul 01 23:12:28 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:28.284439    1325 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-49h72_kube-system(bee4a070-eb2f-45af-a824-f8ebb08e21cb)\"" pod="kube-system/kindnet-49h72" podUID=bee4a070-eb2f-45af-a824-f8ebb08e21cb
	Jul 01 23:12:30 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:30.341675    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:35 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:35.342607    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:40 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:40.343685    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:41 default-k8s-different-port-20220701230032-10066 kubelet[1325]: I0701 23:12:41.953946    1325 scope.go:110] "RemoveContainer" containerID="e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e"
	Jul 01 23:12:41 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:41.954231    1325 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-49h72_kube-system(bee4a070-eb2f-45af-a824-f8ebb08e21cb)\"" pod="kube-system/kindnet-49h72" podUID=bee4a070-eb2f-45af-a824-f8ebb08e21cb
	Jul 01 23:12:45 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:45.344251    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:50 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:50.345243    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:55 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:55.346082    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:12:55 default-k8s-different-port-20220701230032-10066 kubelet[1325]: I0701 23:12:55.953939    1325 scope.go:110] "RemoveContainer" containerID="e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e"
	Jul 01 23:12:55 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:12:55.954220    1325 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-49h72_kube-system(bee4a070-eb2f-45af-a824-f8ebb08e21cb)\"" pod="kube-system/kindnet-49h72" podUID=bee4a070-eb2f-45af-a824-f8ebb08e21cb
	Jul 01 23:13:00 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:13:00.347391    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:13:05 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:13:05.347977    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:13:06 default-k8s-different-port-20220701230032-10066 kubelet[1325]: I0701 23:13:06.953864    1325 scope.go:110] "RemoveContainer" containerID="e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e"
	Jul 01 23:13:06 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:13:06.954134    1325 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-49h72_kube-system(bee4a070-eb2f-45af-a824-f8ebb08e21cb)\"" pod="kube-system/kindnet-49h72" podUID=bee4a070-eb2f-45af-a824-f8ebb08e21cb
	Jul 01 23:13:10 default-k8s-different-port-20220701230032-10066 kubelet[1325]: E0701 23:13:10.348740    1325 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
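The kubelet section above shows the actual failure mode: the CNI config never materializes ("cni plugin not initialized") because kindnet-cni is stuck in CrashLoopBackOff, so the node never becomes Ready. Two follow-up probes that would narrow this down (a sketch; the pod and profile names are taken from this log):

    kubectl --context default-k8s-different-port-20220701230032-10066 -n kube-system logs kindnet-49h72 -c kindnet-cni --previous
    out/minikube-linux-amd64 -p default-k8s-different-port-20220701230032-10066 ssh -- ls /etc/cni/net.d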
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220701230032-10066 -n default-k8s-different-port-20220701230032-10066
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220701230032-10066 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-6d4b75cb6d-zmnqs storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220701230032-10066 describe pod busybox coredns-6d4b75cb6d-zmnqs storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220701230032-10066 describe pod busybox coredns-6d4b75cb6d-zmnqs storage-provisioner: exit status 1 (63.636395ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6qwg6 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-6qwg6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  2m49s (x2 over 8m4s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-6d4b75cb6d-zmnqs" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220701230032-10066 describe pod busybox coredns-6d4b75cb6d-zmnqs storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/DeployApp (484.52s)
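
The FailedScheduling event explains the DeployApp timeout: with kindnet down, the single node keeps its node.kubernetes.io/not-ready taint, which busybox does not tolerate, so the pod stays Pending for the whole 484s. A sketch for confirming the taint directly:

    kubectl --context default-k8s-different-port-20220701230032-10066 get node -o jsonpath='{.items[0].spec.taints}'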

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (534.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220701225718-10066 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2
E0701 23:10:34.503756   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
E0701 23:10:35.082719   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
E0701 23:10:43.468144   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
E0701 23:10:51.855320   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
E0701 23:11:42.423080   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
E0701 23:11:47.697424   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:12:00.873733   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory
E0701 23:12:32.034407   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
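
These cert_rotation errors are cross-test noise: the shared kubeconfig still holds contexts for profiles (addons, ingress-addon-legacy, kindnet, auto, calico, bridge, enable-default-cni) whose client certs were deleted earlier in the run. A cleanup sketch, one stale context at a time:

    kubectl config delete-context addons-20220701222350-10066    # repeat for each stale profile context listed above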

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-20220701225718-10066 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2: exit status 80 (8m52.14255622s)

                                                
                                                
-- stdout --
	* [no-preload-20220701225718-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14483
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node no-preload-20220701225718-10066 in cluster no-preload-20220701225718-10066
	* Pulling base image ...
	* Restarting existing docker container for "no-preload-20220701225718-10066" ...
	* Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	  - Using image kubernetesui/dashboard:v2.6.0
	  - Using image k8s.gcr.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	
	

                                                
                                                
-- /stdout --
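Note that stdout ends after addon enablement with no error banner, so the cause of exit status 80 has to be pulled from the stderr trace below. One detail worth flagging: the image "fake.domain/k8s.gcr.io/echoserver:1.4" is deliberate, since this test pins the metrics-server addon to an unreachable registry (see CustomAddonRegistries:map[MetricsServer:fake.domain] in the config dump below). A sketch for capturing the full failure trail after a run like this (the --file path is illustrative):

    out/minikube-linux-amd64 -p no-preload-20220701225718-10066 logs --file=/tmp/no-preload-secondstart.log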
** stderr ** 
	I0701 23:10:28.436068  269883 out.go:296] Setting OutFile to fd 1 ...
	I0701 23:10:28.436180  269883 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:10:28.436194  269883 out.go:309] Setting ErrFile to fd 2...
	I0701 23:10:28.436201  269883 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:10:28.436618  269883 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 23:10:28.436880  269883 out.go:303] Setting JSON to false
	I0701 23:10:28.438233  269883 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3182,"bootTime":1656713847,"procs":500,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 23:10:28.438304  269883 start.go:125] virtualization: kvm guest
	I0701 23:10:28.441407  269883 out.go:177] * [no-preload-20220701225718-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0701 23:10:28.443028  269883 out.go:177]   - MINIKUBE_LOCATION=14483
	I0701 23:10:28.442993  269883 notify.go:193] Checking for updates...
	I0701 23:10:28.444809  269883 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 23:10:28.446488  269883 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:10:28.448125  269883 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 23:10:28.449746  269883 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 23:10:28.451761  269883 config.go:178] Loaded profile config "no-preload-20220701225718-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:10:28.452167  269883 driver.go:360] Setting default libvirt URI to qemu:///system
	I0701 23:10:28.493617  269883 docker.go:137] docker version: linux-20.10.17
	I0701 23:10:28.493713  269883 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:10:28.600580  269883 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-07-01 23:10:28.523096353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:10:28.600696  269883 docker.go:254] overlay module found
	I0701 23:10:28.603102  269883 out.go:177] * Using the docker driver based on existing profile
	I0701 23:10:28.604609  269883 start.go:284] selected driver: docker
	I0701 23:10:28.604630  269883 start.go:808] validating driver "docker" against &{Name:no-preload-20220701225718-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:no-preload-20220701225718-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:10:28.604744  269883 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 23:10:28.605512  269883 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:10:28.710819  269883 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-07-01 23:10:28.635526958 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:10:28.711050  269883 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 23:10:28.711069  269883 cni.go:95] Creating CNI manager for ""
	I0701 23:10:28.711075  269883 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:10:28.711086  269883 start_flags.go:310] config:
	{Name:no-preload-20220701225718-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:no-preload-20220701225718-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:10:28.713353  269883 out.go:177] * Starting control plane node no-preload-20220701225718-10066 in cluster no-preload-20220701225718-10066
	I0701 23:10:28.714773  269883 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0701 23:10:28.716148  269883 out.go:177] * Pulling base image ...
	I0701 23:10:28.717448  269883 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:10:28.717489  269883 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0701 23:10:28.717646  269883 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/config.json ...
	I0701 23:10:28.717791  269883 cache.go:107] acquiring lock: {Name:mk3aed9edf4e045130f7a3c6fdc7a324a577ec7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.717837  269883 cache.go:107] acquiring lock: {Name:mk8030c0afbd72b38281e129af86f3686df5df89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.717878  269883 cache.go:107] acquiring lock: {Name:mk7ec70fd71856cc28acc69a0da3b72748a4420a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.717910  269883 cache.go:107] acquiring lock: {Name:mk881497b5d07c75cf2f158738d77e27bd2a369d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.717808  269883 cache.go:107] acquiring lock: {Name:mk9ab11f02b498228e877e934d5aaa541b21cbf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.717960  269883 cache.go:107] acquiring lock: {Name:mk5766c1b843c08c650f7c84836d8506a465b496 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.717997  269883 cache.go:107] acquiring lock: {Name:mk3b0e90d77cbe629b1ed14b104838f8ec036785 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.718031  269883 cache.go:107] acquiring lock: {Name:mk72f6f6d64839ffc62747fa568c11250cb4422d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.718093  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2 exists
	I0701 23:10:28.718103  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0701 23:10:28.718116  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2 exists
	I0701 23:10:28.718122  269883 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 184.555µs
	I0701 23:10:28.718134  269883 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0701 23:10:28.718115  269883 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.24.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2" took 279.008µs
	I0701 23:10:28.718142  269883 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.24.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2 succeeded
	I0701 23:10:28.718093  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0701 23:10:28.718153  269883 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.24.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2" took 281.275µs
	I0701 23:10:28.718166  269883 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.24.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2 succeeded
	I0701 23:10:28.718164  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 exists
	I0701 23:10:28.718177  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2 exists
	I0701 23:10:28.718210  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2 exists
	I0701 23:10:28.718216  269883 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.24.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2" took 228.65µs
	I0701 23:10:28.718224  269883 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.24.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2" took 427.452µs
	I0701 23:10:28.718233  269883 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.24.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2 succeeded
	I0701 23:10:28.718167  269883 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 391.086µs
	I0701 23:10:28.718242  269883 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0701 23:10:28.718249  269883 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 exists
	I0701 23:10:28.718229  269883 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.24.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2 succeeded
	I0701 23:10:28.718187  269883 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0" took 349.266µs
	I0701 23:10:28.718259  269883 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.3-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 succeeded
	I0701 23:10:28.718262  269883 cache.go:96] cache image "k8s.gcr.io/pause:3.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7" took 344.995µs
	I0701 23:10:28.718273  269883 cache.go:80] save to tar file k8s.gcr.io/pause:3.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 succeeded
	I0701 23:10:28.718283  269883 cache.go:87] Successfully saved all images to host disk.
	I0701 23:10:28.752422  269883 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0701 23:10:28.752465  269883 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0701 23:10:28.752487  269883 cache.go:208] Successfully downloaded all kic artifacts
	I0701 23:10:28.752528  269883 start.go:352] acquiring machines lock for no-preload-20220701225718-10066: {Name:mk0df5e406dc07f9b5bbaf453954c11d3f5f2a14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:10:28.752631  269883 start.go:356] acquired machines lock for "no-preload-20220701225718-10066" in 71.505µs
	I0701 23:10:28.752665  269883 start.go:94] Skipping create...Using existing machine configuration
	I0701 23:10:28.752673  269883 fix.go:55] fixHost starting: 
	I0701 23:10:28.752958  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:10:28.785790  269883 fix.go:103] recreateIfNeeded on no-preload-20220701225718-10066: state=Stopped err=<nil>
	W0701 23:10:28.785828  269883 fix.go:129] unexpected machine state, will restart: <nil>
	I0701 23:10:28.788364  269883 out.go:177] * Restarting existing docker container for "no-preload-20220701225718-10066" ...
	I0701 23:10:28.789918  269883 cli_runner.go:164] Run: docker start no-preload-20220701225718-10066
	I0701 23:10:29.179864  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:10:29.217535  269883 kic.go:416] container "no-preload-20220701225718-10066" state is running.
	I0701 23:10:29.217931  269883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220701225718-10066
	I0701 23:10:29.251855  269883 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/config.json ...
	I0701 23:10:29.252122  269883 machine.go:88] provisioning docker machine ...
	I0701 23:10:29.252152  269883 ubuntu.go:169] provisioning hostname "no-preload-20220701225718-10066"
	I0701 23:10:29.252196  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:29.287524  269883 main.go:134] libmachine: Using SSH client type: native
	I0701 23:10:29.287708  269883 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0701 23:10:29.287733  269883 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220701225718-10066 && echo "no-preload-20220701225718-10066" | sudo tee /etc/hostname
	I0701 23:10:29.288440  269883 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37316->127.0.0.1:49437: read: connection reset by peer
	I0701 23:10:32.419154  269883 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220701225718-10066
	
	I0701 23:10:32.419236  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:32.453352  269883 main.go:134] libmachine: Using SSH client type: native
	I0701 23:10:32.453538  269883 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0701 23:10:32.453573  269883 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220701225718-10066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220701225718-10066/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220701225718-10066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 23:10:32.570226  269883 main.go:134] libmachine: SSH cmd err, output: <nil>: 
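
The empty SSH output above means the /etc/hosts patch either already matched or applied silently; either way the 127.0.1.1 convention now resolves the container's hostname locally. A sketch for spot-checking it:

    out/minikube-linux-amd64 -p no-preload-20220701225718-10066 ssh -- grep 127.0.1.1 /etc/hosts
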
	I0701 23:10:32.570259  269883 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
	I0701 23:10:32.570290  269883 ubuntu.go:177] setting up certificates
	I0701 23:10:32.570314  269883 provision.go:83] configureAuth start
	I0701 23:10:32.570364  269883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220701225718-10066
	I0701 23:10:32.604673  269883 provision.go:138] copyHostCerts
	I0701 23:10:32.604741  269883 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
	I0701 23:10:32.604764  269883 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
	I0701 23:10:32.604850  269883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1679 bytes)
	I0701 23:10:32.605244  269883 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
	I0701 23:10:32.605267  269883 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
	I0701 23:10:32.605317  269883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
	I0701 23:10:32.605447  269883 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
	I0701 23:10:32.605456  269883 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
	I0701 23:10:32.605493  269883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
	I0701 23:10:32.605552  269883 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220701225718-10066 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220701225718-10066]
	I0701 23:10:32.772605  269883 provision.go:172] copyRemoteCerts
	I0701 23:10:32.772663  269883 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 23:10:32.772694  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:32.806036  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:10:32.889557  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 23:10:32.906187  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0701 23:10:32.922754  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 23:10:32.939238  269883 provision.go:86] duration metric: configureAuth took 368.908559ms
	I0701 23:10:32.939268  269883 ubuntu.go:193] setting minikube options for container-runtime
	I0701 23:10:32.939429  269883 config.go:178] Loaded profile config "no-preload-20220701225718-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:10:32.939441  269883 machine.go:91] provisioned docker machine in 3.687302971s
	I0701 23:10:32.939447  269883 start.go:306] post-start starting for "no-preload-20220701225718-10066" (driver="docker")
	I0701 23:10:32.939452  269883 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 23:10:32.939491  269883 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 23:10:32.939527  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:32.975147  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:10:33.062201  269883 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 23:10:33.065814  269883 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 23:10:33.065840  269883 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 23:10:33.065854  269883 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 23:10:33.065866  269883 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0701 23:10:33.065885  269883 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
	I0701 23:10:33.065955  269883 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
	I0701 23:10:33.066065  269883 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem -> 100662.pem in /etc/ssl/certs
	I0701 23:10:33.066201  269883 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 23:10:33.073331  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:10:33.089719  269883 start.go:309] post-start completed in 150.262783ms
	I0701 23:10:33.089782  269883 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 23:10:33.089819  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:33.125536  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:10:33.210970  269883 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 23:10:33.214873  269883 fix.go:57] fixHost completed within 4.462195685s
	I0701 23:10:33.214897  269883 start.go:81] releasing machines lock for "no-preload-20220701225718-10066", held for 4.462242204s
	I0701 23:10:33.214986  269883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220701225718-10066
	I0701 23:10:33.248938  269883 ssh_runner.go:195] Run: systemctl --version
	I0701 23:10:33.248978  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:33.249031  269883 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0701 23:10:33.249088  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:10:33.285027  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:10:33.286024  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:10:33.386339  269883 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 23:10:33.397864  269883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 23:10:33.407068  269883 docker.go:179] disabling docker service ...
	I0701 23:10:33.407108  269883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 23:10:33.416965  269883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 23:10:33.425446  269883 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 23:10:33.498217  269883 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 23:10:33.568864  269883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0701 23:10:33.577568  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 23:10:33.589825  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0701 23:10:33.598932  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0701 23:10:33.606840  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0701 23:10:33.614425  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0701 23:10:33.622221  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0701 23:10:33.629559  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
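
The five sed edits above patch /etc/containerd/config.toml in place: pin the sandbox (pause) image to k8s.gcr.io/pause:3.7, disable restrict_oom_score_adj and SystemdCgroup, point the CNI conf_dir at /etc/cni/net.mk, and enable an imports drop-in directory. The base64 payload in the last command, dmVyc2lvbiA9IDIK, decodes to "version = 2" plus a newline, so the drop-in merely forces containerd's v2 config schema. A quick check of the decode (illustration only):

    package main

    import (
        "encoding/base64"
        "fmt"
    )

    func main() {
        b, err := base64.StdEncoding.DecodeString("dmVyc2lvbiA9IDIK")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%q\n", b) // prints "version = 2\n"
    }
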
	I0701 23:10:33.642101  269883 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 23:10:33.648858  269883 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 23:10:33.655601  269883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 23:10:33.724238  269883 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 23:10:33.794793  269883 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0701 23:10:33.794860  269883 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0701 23:10:33.798329  269883 start.go:471] Will wait 60s for crictl version
	I0701 23:10:33.798381  269883 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:10:33.824964  269883 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-07-01T23:10:33Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0701 23:10:44.872066  269883 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:10:44.894512  269883 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0701 23:10:44.894588  269883 ssh_runner.go:195] Run: containerd --version
	I0701 23:10:44.922163  269883 ssh_runner.go:195] Run: containerd --version
	I0701 23:10:44.951446  269883 out.go:177] * Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	I0701 23:10:44.952886  269883 cli_runner.go:164] Run: docker network inspect no-preload-20220701225718-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 23:10:44.987019  269883 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0701 23:10:44.990236  269883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
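
The /etc/hosts update above avoids sed -i: it filters out any stale line ending in the hostname, appends the fresh IP-to-name mapping, writes the result to a temp file, then sudo-copies it back over /etc/hosts. The same idea in Go (IP and name taken from the log; writing /etc/hosts directly needs root and is shown only for illustration):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const ip, name = "192.168.94.1", "host.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) { // drop any stale entry
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }
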
	I0701 23:10:44.999796  269883 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:10:44.999840  269883 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:10:45.023088  269883 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:10:45.023106  269883 cache_images.go:84] Images are preloaded, skipping loading
	I0701 23:10:45.023142  269883 ssh_runner.go:195] Run: sudo crictl info
	I0701 23:10:45.045429  269883 cni.go:95] Creating CNI manager for ""
	I0701 23:10:45.045449  269883 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:10:45.045462  269883 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0701 23:10:45.045472  269883 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220701225718-10066 NodeName:no-preload-20220701225718-10066 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0701 23:10:45.045591  269883 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20220701225718-10066"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0701 23:10:45.045663  269883 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20220701225718-10066 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:no-preload-20220701225718-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
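
One detail worth calling out in the kubelet drop-in above: the bare ExecStart= line is not a mistake. In a systemd drop-in, an empty ExecStart= first clears the command list inherited from the base kubelet.service; the full command on the next line then becomes the only ExecStart, which is required for a Type=simple service that permits exactly one start command.
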
	I0701 23:10:45.045704  269883 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0701 23:10:45.052996  269883 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 23:10:45.053052  269883 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 23:10:45.059599  269883 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (524 bytes)
	I0701 23:10:45.073222  269883 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 23:10:45.085371  269883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
	I0701 23:10:45.097222  269883 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0701 23:10:45.099941  269883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:10:45.108409  269883 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066 for IP: 192.168.94.2
	I0701 23:10:45.108501  269883 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
	I0701 23:10:45.108550  269883 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
	I0701 23:10:45.108623  269883 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/client.key
	I0701 23:10:45.108682  269883 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.key.ad8e880a
	I0701 23:10:45.108742  269883 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/proxy-client.key
	I0701 23:10:45.108853  269883 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem (1338 bytes)
	W0701 23:10:45.108900  269883 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066_empty.pem, impossibly tiny 0 bytes
	I0701 23:10:45.108917  269883 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 23:10:45.108949  269883 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
	I0701 23:10:45.108984  269883 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
	I0701 23:10:45.109016  269883 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1679 bytes)
	I0701 23:10:45.109075  269883 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:10:45.109765  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0701 23:10:45.125615  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 23:10:45.141690  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 23:10:45.158417  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/no-preload-20220701225718-10066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 23:10:45.174871  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 23:10:45.191499  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0701 23:10:45.207611  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 23:10:45.223735  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 23:10:45.240344  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 23:10:45.256863  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem --> /usr/share/ca-certificates/10066.pem (1338 bytes)
	I0701 23:10:45.273492  269883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /usr/share/ca-certificates/100662.pem (1708 bytes)
	I0701 23:10:45.289914  269883 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 23:10:45.301974  269883 ssh_runner.go:195] Run: openssl version
	I0701 23:10:45.306905  269883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 23:10:45.314511  269883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:10:45.317377  269883 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul  1 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:10:45.317418  269883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:10:45.322125  269883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 23:10:45.328948  269883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10066.pem && ln -fs /usr/share/ca-certificates/10066.pem /etc/ssl/certs/10066.pem"
	I0701 23:10:45.335846  269883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10066.pem
	I0701 23:10:45.338716  269883 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul  1 22:28 /usr/share/ca-certificates/10066.pem
	I0701 23:10:45.338814  269883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10066.pem
	I0701 23:10:45.343304  269883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10066.pem /etc/ssl/certs/51391683.0"
	I0701 23:10:45.350375  269883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100662.pem && ln -fs /usr/share/ca-certificates/100662.pem /etc/ssl/certs/100662.pem"
	I0701 23:10:45.357390  269883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100662.pem
	I0701 23:10:45.360175  269883 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul  1 22:28 /usr/share/ca-certificates/100662.pem
	I0701 23:10:45.360212  269883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100662.pem
	I0701 23:10:45.364513  269883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100662.pem /etc/ssl/certs/3ec20f2e.0"
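
The link names b5213941.0, 51391683.0, and 3ec20f2e.0 are OpenSSL subject hashes: "openssl x509 -hash -noout -in <cert>" prints an 8-hex-digit hash of the certificate's subject, and OpenSSL looks up trust anchors in /etc/ssl/certs by exactly that <hash>.0 naming. A sketch of the hash-then-link step for the minikubeCA cert (assumes openssl on PATH and write access to /etc/ssl/certs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out))) // e.g. b5213941.0
        _ = os.Remove(link) // replace any stale link
        if err := os.Symlink(pem, link); err != nil {
            panic(err)
        }
    }
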
	I0701 23:10:45.370833  269883 kubeadm.go:395] StartCluster: {Name:no-preload-20220701225718-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:no-preload-20220701225718-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:10:45.370926  269883 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0701 23:10:45.370953  269883 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:10:45.394911  269883 cri.go:87] found id: "cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602"
	I0701 23:10:45.394940  269883 cri.go:87] found id: "b6c46b43c578cf7a32b68dbbe142a34080a7ce8d919b3b25cd350be67762ece8"
	I0701 23:10:45.394947  269883 cri.go:87] found id: "ac546802283136693ffa6f232baccec2dc6bf324f6201f319db62ceb6e58dae8"
	I0701 23:10:45.394953  269883 cri.go:87] found id: "9f4bd4048f717703723402dc2bd31164255acb9e131a3e8a35d47bf250201012"
	I0701 23:10:45.394959  269883 cri.go:87] found id: "6af50f79ce840cbfb4c4974a97d45453382d577490ba8a05d4aa2d0412fbf462"
	I0701 23:10:45.394966  269883 cri.go:87] found id: "b90cae4e4b7ea37050f35e70b6b5bc7d1c0d2b243219ea44859d2ac0853bb228"
	I0701 23:10:45.394971  269883 cri.go:87] found id: ""
	I0701 23:10:45.395004  269883 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0701 23:10:45.407224  269883 cri.go:114] JSON = null
	W0701 23:10:45.407274  269883 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
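
The warning above is the restart path probing for paused containers: crictl reports six kube-system containers, but "runc list" under the k8s.io root returns null, meaning none are in a paused state, so there is nothing to unpause and the code falls through to a full stop-and-restart. A small cross-check in the same spirit (commands copied from the log; error handling elided for brevity):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        ps, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        ids := strings.Fields(string(ps))

        lj, _ := exec.Command("sudo", "runc", "--root", "/run/containerd/runc/k8s.io",
            "list", "-f", "json").Output()
        var states []map[string]any
        _ = json.Unmarshal(lj, &states) // the literal "null" unmarshals to a nil slice

        fmt.Printf("crictl sees %d containers, runc sees %d\n", len(ids), len(states))
    }
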
	I0701 23:10:45.407316  269883 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 23:10:45.413788  269883 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0701 23:10:45.413811  269883 kubeadm.go:626] restartCluster start
	I0701 23:10:45.413848  269883 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 23:10:45.419941  269883 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:45.420556  269883 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220701225718-10066" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:10:45.420886  269883 kubeconfig.go:127] "no-preload-20220701225718-10066" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig - will repair!
	I0701 23:10:45.421418  269883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:10:45.422688  269883 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 23:10:45.428759  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:45.428807  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:45.436036  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:45.636442  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:45.636498  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:45.645173  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:45.836479  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:45.836560  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:45.845558  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:46.036840  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:46.036996  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:46.045508  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:46.236821  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:46.236886  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:46.245242  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:46.436407  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:46.436476  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:46.445374  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:46.636693  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:46.636776  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:46.645429  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:46.836720  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:46.836780  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:46.845765  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:47.037048  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:47.037122  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:47.045534  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:47.236841  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:47.236919  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:47.245338  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:47.436619  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:47.436682  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:47.445831  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:47.637106  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:47.637177  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:47.646000  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:47.836229  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:47.836305  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:47.844891  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.036112  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:48.036194  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:48.044872  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.237166  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:48.237244  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:48.245689  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.437095  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:48.437163  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:48.446079  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.446102  269883 api_server.go:165] Checking apiserver status ...
	I0701 23:10:48.446147  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:10:48.453958  269883 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.453982  269883 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0701 23:10:48.453989  269883 kubeadm.go:1092] stopping kube-system containers ...
	I0701 23:10:48.454005  269883 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0701 23:10:48.454064  269883 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:10:48.477691  269883 cri.go:87] found id: "cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602"
	I0701 23:10:48.477710  269883 cri.go:87] found id: "b6c46b43c578cf7a32b68dbbe142a34080a7ce8d919b3b25cd350be67762ece8"
	I0701 23:10:48.477717  269883 cri.go:87] found id: "ac546802283136693ffa6f232baccec2dc6bf324f6201f319db62ceb6e58dae8"
	I0701 23:10:48.477722  269883 cri.go:87] found id: "9f4bd4048f717703723402dc2bd31164255acb9e131a3e8a35d47bf250201012"
	I0701 23:10:48.477728  269883 cri.go:87] found id: "6af50f79ce840cbfb4c4974a97d45453382d577490ba8a05d4aa2d0412fbf462"
	I0701 23:10:48.477734  269883 cri.go:87] found id: "b90cae4e4b7ea37050f35e70b6b5bc7d1c0d2b243219ea44859d2ac0853bb228"
	I0701 23:10:48.477740  269883 cri.go:87] found id: ""
	I0701 23:10:48.477744  269883 cri.go:232] Stopping containers: [cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602 b6c46b43c578cf7a32b68dbbe142a34080a7ce8d919b3b25cd350be67762ece8 ac546802283136693ffa6f232baccec2dc6bf324f6201f319db62ceb6e58dae8 9f4bd4048f717703723402dc2bd31164255acb9e131a3e8a35d47bf250201012 6af50f79ce840cbfb4c4974a97d45453382d577490ba8a05d4aa2d0412fbf462 b90cae4e4b7ea37050f35e70b6b5bc7d1c0d2b243219ea44859d2ac0853bb228]
	I0701 23:10:48.477788  269883 ssh_runner.go:195] Run: which crictl
	I0701 23:10:48.480366  269883 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop cd8a4893d2d37f88b6ee26a4705535a3502bdbcdfddf31e9b59a9eb28afdc602 b6c46b43c578cf7a32b68dbbe142a34080a7ce8d919b3b25cd350be67762ece8 ac546802283136693ffa6f232baccec2dc6bf324f6201f319db62ceb6e58dae8 9f4bd4048f717703723402dc2bd31164255acb9e131a3e8a35d47bf250201012 6af50f79ce840cbfb4c4974a97d45453382d577490ba8a05d4aa2d0412fbf462 b90cae4e4b7ea37050f35e70b6b5bc7d1c0d2b243219ea44859d2ac0853bb228
	I0701 23:10:48.505890  269883 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0701 23:10:48.515195  269883 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:10:48.521761  269883 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul  1 22:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul  1 22:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jul  1 22:57 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul  1 22:57 /etc/kubernetes/scheduler.conf
	
	I0701 23:10:48.521807  269883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0701 23:10:48.527978  269883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0701 23:10:48.534409  269883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0701 23:10:48.540704  269883 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.540749  269883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 23:10:48.547734  269883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0701 23:10:48.555417  269883 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:10:48.555456  269883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
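
The grep-then-rm sequence above is a staleness check: each kubeconfig under /etc/kubernetes must point at https://control-plane.minikube.internal:8443, and any file where grep exits non-zero is deleted so the upcoming kubeadm kubeconfig phase regenerates it. Condensed into a loop (file list and URL from the log):

    package main

    import "os/exec"

    func main() {
        const want = "https://control-plane.minikube.internal:8443"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if err := exec.Command("sudo", "grep", "-q", want, f).Run(); err != nil {
                _ = exec.Command("sudo", "rm", "-f", f).Run() // stale: regenerate later
            }
        }
    }
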
	I0701 23:10:48.561653  269883 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:10:48.568679  269883 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0701 23:10:48.568731  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:10:48.610822  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:10:49.481354  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:10:49.661389  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:10:49.719236  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
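
Instead of a monolithic "kubeadm init", the restart path replays individual phases against the existing data, in order: certs, kubeconfig, kubelet-start, control-plane, and local etcd, all driven by the same /var/tmp/minikube/kubeadm.yaml. The same sequence as a loop (PATH prefix and config path copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" ` +
                "kubeadm init phase " + p + " --config /var/tmp/minikube/kubeadm.yaml"
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                panic(fmt.Sprintf("phase %q failed: %v\n%s", p, err, out))
            }
        }
    }
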
	I0701 23:10:49.825158  269883 api_server.go:51] waiting for apiserver process to appear ...
	I0701 23:10:49.825270  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:10:50.335318  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:10:50.834701  269883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:10:50.846390  269883 api_server.go:71] duration metric: took 1.021235424s to wait for apiserver process to appear ...
	I0701 23:10:50.846420  269883 api_server.go:87] waiting for apiserver healthz status ...
	I0701 23:10:50.846431  269883 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0701 23:10:50.846825  269883 api_server.go:256] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0701 23:10:51.347542  269883 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0701 23:10:54.133900  269883 api_server.go:266] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0701 23:10:54.133986  269883 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0701 23:10:54.347164  269883 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0701 23:10:54.351414  269883 api_server.go:266] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:10:54.351438  269883 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:10:54.847723  269883 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0701 23:10:54.852128  269883 api_server.go:266] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:10:54.852158  269883 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:10:55.347708  269883 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0701 23:10:55.352265  269883 api_server.go:266] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0701 23:10:55.358013  269883 api_server.go:140] control plane version: v1.24.2
	I0701 23:10:55.358035  269883 api_server.go:130] duration metric: took 4.511609554s to wait for apiserver health ...
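
The healthz exchange above shows the expected startup sequence: first a 403 (the anonymous probe is rejected until the rbac/bootstrap-roles post-start hook finishes), then 500s while the remaining [-] hooks drain, then 200 "ok". A minimal poller that treats 403 and 500 as retryable (InsecureSkipVerify is an assumption here, standing in for the real client's CA handling):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            if resp, err := client.Get("https://192.168.94.2:8443/healthz"); err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == 200 && string(body) == "ok" {
                    fmt.Println("apiserver healthy")
                    return
                }
                // 403 (anonymous, RBAC not bootstrapped) and 500 (hooks pending) retry
            }
            time.Sleep(500 * time.Millisecond)
        }
        panic("apiserver never became healthy")
    }
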
	I0701 23:10:55.358045  269883 cni.go:95] Creating CNI manager for ""
	I0701 23:10:55.358050  269883 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:10:55.360161  269883 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:10:55.361441  269883 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:10:55.364979  269883 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:10:55.364998  269883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:10:55.377645  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
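
The CNI manifest (kindnet, per the recommendation above) is applied with the node's own kubectl binary and the root kubeconfig rather than the host's client. Equivalent invocation (paths copied from the log line above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.24.2/kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%v\n%s", err, out))
        }
        fmt.Print(string(out))
    }
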
	I0701 23:10:56.166732  269883 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 23:10:56.173254  269883 system_pods.go:59] 9 kube-system pods found
	I0701 23:10:56.173284  269883 system_pods.go:61] "coredns-6d4b75cb6d-mbfz4" [2ba91f90-b153-4f32-8309-108f0c8156db] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:10:56.173292  269883 system_pods.go:61] "etcd-no-preload-20220701225718-10066" [eb03d3be-2878-4ae8-9dfc-5a4fccffca06] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0701 23:10:56.173300  269883 system_pods.go:61] "kindnet-b5wkl" [bc770683-78b7-449f-a0af-5a2cc006275c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0701 23:10:56.173308  269883 system_pods.go:61] "kube-apiserver-no-preload-20220701225718-10066" [83390193-15db-49db-9ca3-065ebded60a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0701 23:10:56.173317  269883 system_pods.go:61] "kube-controller-manager-no-preload-20220701225718-10066" [086fda3b-1ef9-4e42-944f-4c20bbde78b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0701 23:10:56.173323  269883 system_pods.go:61] "kube-proxy-5ck82" [1b54a384-18b1-4c4f-84ab-fe3f8d2c3100] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0701 23:10:56.173328  269883 system_pods.go:61] "kube-scheduler-no-preload-20220701225718-10066" [87e67937-d3d1-47f6-9ee3-cb47460c5a96] Running
	I0701 23:10:56.173334  269883 system_pods.go:61] "metrics-server-5c6f97fb75-hqds8" [8c904dd9-6f61-494f-9ce0-b1e79f7a8f32] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:10:56.173344  269883 system_pods.go:61] "storage-provisioner" [fb659ca7-b379-4467-bf65-4ae7b8b0b2a9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:10:56.173348  269883 system_pods.go:74] duration metric: took 6.593831ms to wait for pod list to return data ...
	I0701 23:10:56.173354  269883 node_conditions.go:102] verifying NodePressure condition ...
	I0701 23:10:56.175724  269883 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0701 23:10:56.175753  269883 node_conditions.go:123] node cpu capacity is 8
	I0701 23:10:56.175768  269883 node_conditions.go:105] duration metric: took 2.40915ms to run NodePressure ...
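
The three Pending pods above all cite the node.kubernetes.io/not-ready taint. That taint stays on the node until the kubelet reports Ready, which in turn waits on a functioning CNI, so Pending is the normal state seconds after the kindnet apply and explains the pod_ready retries that follow. A quick way to watch the taint clear (standard kubectl jsonpath output; the kubeconfig path is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl",
            "--kubeconfig", "/var/lib/minikube/kubeconfig",
            "get", "node", "-o", "jsonpath={.items[*].spec.taints}").CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%v\n%s", err, out))
        }
        fmt.Println(string(out)) // expect node.kubernetes.io/not-ready until CNI is up
    }
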
	I0701 23:10:56.175789  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:10:56.319373  269883 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0701 23:10:56.323915  269883 kubeadm.go:777] kubelet initialised
	I0701 23:10:56.323936  269883 kubeadm.go:778] duration metric: took 4.537399ms waiting for restarted kubelet to initialise ...
	I0701 23:10:56.323943  269883 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:10:56.329062  269883 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace to be "Ready" ...
	I0701 23:10:58.335246  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:00.835256  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:03.334510  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:05.835173  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:08.334350  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:11:10.334372  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... the pod_ready.go:102 message above repeated roughly every 2.5 seconds from 23:11:12 through 23:14:50 (96 further polls), each reporting pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" still Pending/Unschedulable behind the untolerated node.kubernetes.io/not-ready taint ...]
	I0701 23:14:52.834828  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:55.334778  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:56.331712  269883 pod_ready.go:81] duration metric: took 4m0.0026135s waiting for pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace to be "Ready" ...
	E0701 23:14:56.331755  269883 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0701 23:14:56.331779  269883 pod_ready.go:38] duration metric: took 4m0.007826908s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:14:56.331809  269883 kubeadm.go:630] restartCluster took 4m10.917993696s
	W0701 23:14:56.331941  269883 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
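	The wait loop above is blocked by the node.kubernetes.io/not-ready taint: CoreDNS stays Pending because the scheduler will not place it on a node that has not reached Ready. A quick way to confirm this against a live cluster (a hedged sketch; the context name assumes minikube's convention of naming contexts after the profile, and the kubeconfig from this run must be active):
	
	    kubectl --context no-preload-20220701225718-10066 describe nodes | grep -i taints
	    # or list the taints directly
	    kubectl --context no-preload-20220701225718-10066 get nodes -o jsonpath='{.items[*].spec.taints}'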
	I0701 23:14:56.331974  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0701 23:14:57.984431  269883 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.65243003s)
	I0701 23:14:57.984496  269883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:14:57.994269  269883 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:14:58.001094  269883 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0701 23:14:58.001159  269883 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:14:58.007683  269883 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
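	The four missing files are expected here: the `kubeadm reset --force` two lines earlier removes them, so the stale-config probe exits 2 and minikube falls through to a fresh `kubeadm init` on the next line. The same probe can be reproduced by hand (a sketch; assumes the profile container is still running):
	
	    minikube -p no-preload-20220701225718-10066 ssh -- \
	      sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	                  /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf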
	I0701 23:14:58.007734  269883 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0701 23:15:06.936698  269883 out.go:204]   - Generating certificates and keys ...
	I0701 23:15:06.939424  269883 out.go:204]   - Booting up control plane ...
	I0701 23:15:06.941904  269883 out.go:204]   - Configuring RBAC rules ...
	I0701 23:15:06.944403  269883 cni.go:95] Creating CNI manager for ""
	I0701 23:15:06.944429  269883 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:15:06.945976  269883 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:15:06.947445  269883 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:15:06.951630  269883 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:15:06.951650  269883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:15:06.966756  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0701 23:15:07.699280  269883 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 23:15:07.699401  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:07.699419  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04 minikube.k8s.io/name=no-preload-20220701225718-10066 minikube.k8s.io/updated_at=2022_07_01T23_15_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:07.706386  269883 ops.go:34] apiserver oom_adj: -16
	I0701 23:15:07.765556  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:08.338006  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:08.838005  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:09.337996  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:09.837437  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:10.337629  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:10.837363  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:11.337763  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:11.838075  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:12.338080  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:12.837649  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:13.337387  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:13.838035  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:14.337961  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:14.838063  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:15.338241  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:15.837500  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:16.337613  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:16.838363  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:17.337701  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:17.838061  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:18.337742  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:18.838306  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:19.337570  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:19.837680  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:19.892044  269883 kubeadm.go:1045] duration metric: took 12.192690701s to wait for elevateKubeSystemPrivileges.
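	The burst of `kubectl get sa default` calls between 23:15:07 and 23:15:19 is minikube polling until the `default` ServiceAccount exists, a proxy for the service-account machinery being up so the `minikube-rbac` clusterrolebinding is usable. Stripped to its essence, the loop is just (a minimal sketch, run inside the node, e.g. via `minikube -p <profile> ssh`; the ~0.5 s interval matches the timestamps above):
	
	    # Poll until the default ServiceAccount appears.
	    until sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done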
	I0701 23:15:19.892072  269883 kubeadm.go:397] StartCluster complete in 4m34.521249474s
	I0701 23:15:19.892091  269883 settings.go:142] acquiring lock: {Name:mk319951f11766fbe002e53432d5b04e4322851e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:15:19.892193  269883 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:15:19.893038  269883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:15:20.407163  269883 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220701225718-10066" rescaled to 1
	I0701 23:15:20.407233  269883 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 23:15:20.409054  269883 out.go:177] * Verifying Kubernetes components...
	I0701 23:15:20.407277  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 23:15:20.407307  269883 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0701 23:15:20.407455  269883 config.go:178] Loaded profile config "no-preload-20220701225718-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:15:20.410261  269883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:15:20.410307  269883 addons.go:65] Setting dashboard=true in profile "no-preload-20220701225718-10066"
	I0701 23:15:20.410316  269883 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220701225718-10066"
	I0701 23:15:20.410322  269883 addons.go:65] Setting metrics-server=true in profile "no-preload-20220701225718-10066"
	I0701 23:15:20.410331  269883 addons.go:153] Setting addon dashboard=true in "no-preload-20220701225718-10066"
	I0701 23:15:20.410333  269883 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220701225718-10066"
	W0701 23:15:20.410339  269883 addons.go:162] addon dashboard should already be in state true
	I0701 23:15:20.410339  269883 addons.go:153] Setting addon metrics-server=true in "no-preload-20220701225718-10066"
	W0701 23:15:20.410348  269883 addons.go:162] addon metrics-server should already be in state true
	I0701 23:15:20.410378  269883 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 23:15:20.410384  269883 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 23:15:20.410308  269883 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220701225718-10066"
	I0701 23:15:20.410415  269883 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220701225718-10066"
	W0701 23:15:20.410428  269883 addons.go:162] addon storage-provisioner should already be in state true
	I0701 23:15:20.410464  269883 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 23:15:20.410690  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.410883  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.410898  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.410944  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.462647  269883 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0701 23:15:20.462859  269883 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220701225718-10066"
	I0701 23:15:20.464095  269883 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0701 23:15:20.464150  269883 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0701 23:15:20.464162  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0701 23:15:20.464109  269883 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0701 23:15:20.464170  269883 addons.go:162] addon default-storageclass should already be in state true
	I0701 23:15:20.465490  269883 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0701 23:15:20.466852  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0701 23:15:20.466866  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0701 23:15:20.465507  269883 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 23:15:20.466910  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:15:20.468347  269883 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:15:20.468364  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 23:15:20.468412  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:15:20.465559  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:15:20.467550  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.497855  269883 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220701225718-10066" to be "Ready" ...
	I0701 23:15:20.497910  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0701 23:15:20.515144  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:15:20.520029  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:15:20.522289  269883 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 23:15:20.522310  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 23:15:20.522357  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:15:20.524783  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:15:20.568239  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:15:20.635327  269883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:15:20.635528  269883 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0701 23:15:20.635546  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0701 23:15:20.635773  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0701 23:15:20.635792  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0701 23:15:20.720153  269883 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0701 23:15:20.720184  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0701 23:15:20.720330  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0701 23:15:20.720356  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0701 23:15:20.735914  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0701 23:15:20.735942  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0701 23:15:20.738036  269883 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:15:20.738058  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0701 23:15:20.751468  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0701 23:15:20.751494  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0701 23:15:20.751989  269883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:15:20.830998  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0701 23:15:20.831029  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0701 23:15:20.835184  269883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 23:15:20.919071  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0701 23:15:20.919097  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0701 23:15:20.931803  269883 start.go:809] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
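	The `host.minikube.internal` record confirmed above was injected by the sed pipeline logged at 23:15:20.497910: it splices a `hosts` block into the CoreDNS Corefile just before the `forward` directive and feeds the result to `kubectl replace`. The outcome can be checked with (a sketch, run inside the node, using the same binary and kubeconfig paths as the log):
	
	    sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -B1 -A3 'hosts {'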
	I0701 23:15:20.938634  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0701 23:15:20.938663  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0701 23:15:21.027932  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0701 23:15:21.027961  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0701 23:15:21.120018  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:15:21.120044  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0701 23:15:21.139289  269883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:15:21.542831  269883 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220701225718-10066"
	I0701 23:15:22.318204  269883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.178852341s)
	I0701 23:15:22.320260  269883 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0701 23:15:22.321764  269883 addons.go:414] enableAddons completed in 1.914474598s
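	The recurring `scp memory --> /etc/kubernetes/addons/...` lines mean each addon manifest is embedded in the minikube binary and streamed over the SSH session rather than copied from a host file; the node's own kubectl then applies it. The same addons can be toggled from the CLI (a hedged sketch; this run enabled them via start flags instead):
	
	    minikube -p no-preload-20220701225718-10066 addons enable metrics-server
	    minikube -p no-preload-20220701225718-10066 addons enable dashboard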
	I0701 23:15:22.506049  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:25.003072  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:27.003942  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:29.503567  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:31.503801  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:33.504159  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:35.504602  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:38.003422  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:40.504288  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:42.504480  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:44.504514  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:47.002872  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:49.003639  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:51.503660  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:53.503915  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:56.003212  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:58.003807  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:00.504360  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:03.003336  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:05.503324  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:07.503773  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:10.003039  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:12.003124  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:14.504207  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:17.003682  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:19.503321  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:21.503670  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:23.504169  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:26.003440  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:28.503980  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:31.003131  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:33.003828  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:35.503530  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:37.503721  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:39.504219  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:42.002779  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:44.003891  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:46.503378  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:48.503897  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:51.003349  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:53.504221  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:56.003927  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:58.503637  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:00.503665  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:02.504224  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:05.003494  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:07.503949  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:09.504075  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:12.003349  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:14.004090  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:16.503717  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:18.504348  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:21.002849  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:23.003827  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:25.503280  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:27.503458  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:29.503895  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:32.003296  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:34.003684  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:36.504246  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:39.003597  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:41.504297  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:44.003653  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:46.003704  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:48.503830  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:51.002929  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:53.503901  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:56.003435  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:58.503409  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:00.504015  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:02.504075  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:05.003477  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:07.003973  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:09.503332  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:11.504503  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:14.002820  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:16.003780  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:18.503619  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:20.504289  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:23.003341  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:25.003796  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:27.504253  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:30.003794  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:32.503813  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:34.504191  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:37.003581  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:39.504225  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:42.003356  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:44.003625  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:46.504247  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:49.003291  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:51.003453  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:53.504320  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:56.003487  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:58.504264  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:00.504489  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:03.003398  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:05.004021  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:07.503771  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:10.003129  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:12.003382  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:14.504382  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:17.003939  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:19.503019  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:20.505831  269883 node_ready.go:38] duration metric: took 4m0.007935364s waiting for node "no-preload-20220701225718-10066" to be "Ready" ...
	I0701 23:19:20.507971  269883 out.go:177] 
	W0701 23:19:20.509514  269883 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0701 23:19:20.509536  269883 out.go:239] * 
	W0701 23:19:20.510312  269883 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 23:19:20.511951  269883 out.go:177] 

                                                
                                                
** /stderr **
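The two timeouts in the log tell one story: after the reset-and-reinit, the node never reported Ready before the node_ready loop gave up (it polled from 23:15:20 to 23:19:20), and a NotReady node keeps its taint, which is what kept CoreDNS Pending earlier. The kubelet stays NotReady until a CNI plugin, kindnet per the log line at 23:15:06, is installed and healthy. Typical first triage steps for a run like this (hedged sketches; pod names and kubeconfig availability depend on the run):

    # Is the node stuck NotReady, and on which condition?
    kubectl --context no-preload-20220701225718-10066 get nodes -o wide
    # Are the CNI (kindnet) and other kube-system pods running?
    kubectl --context no-preload-20220701225718-10066 -n kube-system get pods -o wide
    # Kubelet logs from inside the kic container
    minikube -p no-preload-20220701225718-10066 ssh -- sudo journalctl -u kubelet --no-pager | tail -n 50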
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-20220701225718-10066 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220701225718-10066
helpers_test.go:235: (dbg) docker inspect no-preload-20220701225718-10066:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff",
	        "Created": "2022-07-01T22:57:20.298940328Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 270210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-01T23:10:29.171406497Z",
	            "FinishedAt": "2022-07-01T23:10:27.869046021Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff/hostname",
	        "HostsPath": "/var/lib/docker/containers/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff/hosts",
	        "LogPath": "/var/lib/docker/containers/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff-json.log",
	        "Name": "/no-preload-20220701225718-10066",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220701225718-10066:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220701225718-10066",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f8e3e2d6947308dcda48dbc22fb554a071cfe01234e052dce72729a0c26066f6-init/diff:/var/lib/docker/overlay2/edb6d17bdc5cabdb1fac75f829fa6b696b451285a56daf994d10472ac162d1fa/diff:/var/lib/docker/overlay2/12553db54f9401cb3cd4ea304c82950d708bec4ffbeafa05f361df03389b23f5/diff:/var/lib/docker/overlay2/7e3fcd2813dd5acb24030b1419b7a8a53ea2329287a62c3cd166eb5e47617a4e/diff:/var/lib/docker/overlay2/15f31da888d327ea20972c47badf360a35e5d2b17a3fae8f69cb0d0d1db2bfa0/diff:/var/lib/docker/overlay2/4d7fcdaa5483b16cc999e79056aae74f5814a7aca964696154ac6ec31eebe508/diff:/var/lib/docker/overlay2/9f3a7dbc3c70fba7c77f5d8e54e394ac0c37123defe036d76af7d048d08b974e/diff:/var/lib/docker/overlay2/4f0b0b0343fd3ac1a6d5f774e91228b75f1e33f810714d09d38ecd1e38cfe250/diff:/var/lib/docker/overlay2/bfd684b2b6e2cfe1f30901dbf02332074c65e296b386b083ea38c10128b42115/diff:/var/lib/docker/overlay2/d05e53b39ab1962ee7dd1a17c7c6df7e7db940db5a19b0d9312ccafaf3c67afc/diff:/var/lib/docker/overlay2/7a24a3
8f96e1aff9985bf714d962a37888eced9c4b8a3018e499080634306871/diff:/var/lib/docker/overlay2/b54ee85e20ecd71a593d453d879a26c40869049e30ba5b18b4c021c4cf667ec6/diff:/var/lib/docker/overlay2/cc9f249b45cce30a5a0432a644435d68e765dc097e3bfdbde41b7d725f258866/diff:/var/lib/docker/overlay2/dac381b24f77e5de76295d05ca2d4ae6a611d31efdf28050dc8acbc16f893037/diff:/var/lib/docker/overlay2/fe9cd2517ce90b2330101b987c76605b9d73f40b8c967e33975ad22bd3e427df/diff:/var/lib/docker/overlay2/34da7eb6e789c53ba8f86e0d904243d3314979a523044238ab8b7e9e95f685d4/diff:/var/lib/docker/overlay2/44c3ffe75f70e4d9d5d17064609a4a167961cdca81aab03b2a72dfbe05d09f41/diff:/var/lib/docker/overlay2/76feeb878d0278a48230287307ce3a01000291c1284d2222d114e37836ebc153/diff:/var/lib/docker/overlay2/3ab5eb284edb3402610bb5d4fbd0d8c3fc6b53fd39d2a66cc3a9bb1e313fe5ee/diff:/var/lib/docker/overlay2/7604bb52dbaf4e2c359f0b2b11aa345bbf33498b0346044f4c6ff3178a3e8df5/diff:/var/lib/docker/overlay2/0a19abc50663f9812511b849524a7625a16f183fa64aff5a0a9c1c7664cc2027/diff:/var/lib/d
ocker/overlay2/00d25c59a62da71857955ae29c7c803b77cb4c97b23913f166c62440a3b63456/diff:/var/lib/docker/overlay2/cd6c06e5c4d7bdfb5a78ffcf6c0cf331a6101b713bd238ad5c0ab119ad599bf4/diff:/var/lib/docker/overlay2/73ae644d477b17687dc53b7176c9e05aa10e0e0cc22b0d2adbd96493280d2696/diff:/var/lib/docker/overlay2/1490b9c59b00b3ee6078671a8023960891ee2bfc21bfe92c12a71c87ea04fff1/diff:/var/lib/docker/overlay2/42f40e76b67a2f74340f543b0beab44f98b10e0a0903d4006a173272c52a31d0/diff:/var/lib/docker/overlay2/648763e18e8325f20ff26dd88a1788a2d9e575a9ca24f44b2430918381add179/diff:/var/lib/docker/overlay2/955b13af9b6a054abdcd0dfdf71ce57d6da1b5718bee40c52fc7739b4ff16496/diff:/var/lib/docker/overlay2/1e55f6bfa306e6667808548601e508c0335b7021c9c6791ee5ed3df2514aff12/diff:/var/lib/docker/overlay2/ed1fd4b4489438557a54639b2ed769eb0371193cd0129b222edea8f79274f7e8/diff:/var/lib/docker/overlay2/c4058f514a3e669de2d6139c7bc94d4187679acabf288e44e25b5d50db5b14ac/diff:/var/lib/docker/overlay2/f55c5f04c4a64f585a9b90e7f23e5d5142014f06b3bf877739879029b76
b574f/diff:/var/lib/docker/overlay2/74b9141dda92027ee02e8b3abc4bb05ed13ce01d58f4c3d644e7235e411a863e/diff:/var/lib/docker/overlay2/53dd59b4b1dfa382f4b296dffd27b84e247851098ea1c719fa3b8ed3e158e73d/diff:/var/lib/docker/overlay2/01e3082bf93042f963fa4d0b92796bd5a1f3663e87f24a76a6fb5f3dabdb3d2f/diff:/var/lib/docker/overlay2/6b96d2f01ba7bf16240d81397be753c74fa1eb78889ce36d1a88224496f5ee64/diff:/var/lib/docker/overlay2/1c326eb1859e3f119094b72e595fe89b3be1895193b0f9124ec3aa7be78fa0ff/diff:/var/lib/docker/overlay2/9c31f48fe49160b4b3d824b787329ea1d50133fd86b0c7f2054bebed2bd136b4/diff:/var/lib/docker/overlay2/6b78f7313fcedc631cc839c84f80e065670f47ccc8e548578527bec6b0f2cce3/diff:/var/lib/docker/overlay2/6bf15a167fccdef650b6e6b0daa521189e2c744a9a41640bc0e963421f139437/diff:/var/lib/docker/overlay2/b01230a9beae37bded75410c1a3849626080918fdc36535a1f4092f9227a7ccf/diff:/var/lib/docker/overlay2/793ce1632cff927c8e2528b23a62c94ed141cb52d5df04225085072c61ca5bb7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f8e3e2d6947308dcda48dbc22fb554a071cfe01234e052dce72729a0c26066f6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f8e3e2d6947308dcda48dbc22fb554a071cfe01234e052dce72729a0c26066f6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f8e3e2d6947308dcda48dbc22fb554a071cfe01234e052dce72729a0c26066f6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220701225718-10066",
	                "Source": "/var/lib/docker/volumes/no-preload-20220701225718-10066/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220701225718-10066",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220701225718-10066",
	                "name.minikube.sigs.k8s.io": "no-preload-20220701225718-10066",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4ace33098aa0f86a5e7c360e6ec28bc842985cefecf875d3cd83a6f829c7d2d7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49437"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49433"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49434"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4ace33098aa0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220701225718-10066": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6714999bf303",
	                        "no-preload-20220701225718-10066"
	                    ],
	                    "NetworkID": "1edec7b6219d6237636ff26267a26187f0ef2e748e4635b07760f0d37cc8596c",
	                    "EndpointID": "115f09d6b4a01169f14b8656811420109ca1c74fd1bdac734e6008c69c7cb092",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
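
The inspect output above publishes each container port only on 127.0.0.1 with an ephemeral host port (22/tcp maps to 49437, 8443/tcp to 49434, and so on). The logs below read these mappings back with a Go template passed to docker container inspect -f. A minimal Go sketch of that lookup, assuming only that the docker CLI is on PATH; the container name is copied from the inspect output:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort returns the host port Docker mapped to a container's
	// 22/tcp, using the same Go template the cli_runner lines below run.
	func hostSSHPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("no-preload-20220701225718-10066")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh host port:", port) // 49437 in the inspect output above
	}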
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220701225718-10066 -n no-preload-20220701225718-10066
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20220701225718-10066 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	| start   | -p newest-cni-20220701230537-10066 --memory=2200           | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:06 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p newest-cni-20220701230537-10066 --memory=2200           | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:07 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC | 01 Jul 22 23:10 UTC |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC | 01 Jul 22 23:10 UTC |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC | 01 Jul 22 23:10 UTC |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC |                     |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |          |         |         |                     |                     |
	|         | --wait=true --preload=false                                |          |         |         |                     |                     |
	|         | --driver=docker                                            |          |         |         |                     |                     |
	|         | --container-runtime=containerd                             |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | default-k8s-different-port-20220701230032-10066            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | default-k8s-different-port-20220701230032-10066            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | default-k8s-different-port-20220701230032-10066            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC |                     |
	|         | default-k8s-different-port-20220701230032-10066            |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/01 23:13:36
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 23:13:36.508585  275844 out.go:296] Setting OutFile to fd 1 ...
	I0701 23:13:36.508812  275844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:13:36.508825  275844 out.go:309] Setting ErrFile to fd 2...
	I0701 23:13:36.508833  275844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:13:36.509394  275844 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 23:13:36.509707  275844 out.go:303] Setting JSON to false
	I0701 23:13:36.511123  275844 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3370,"bootTime":1656713847,"procs":332,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 23:13:36.511210  275844 start.go:125] virtualization: kvm guest
	I0701 23:13:36.513852  275844 out.go:177] * [default-k8s-different-port-20220701230032-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0701 23:13:36.516346  275844 out.go:177]   - MINIKUBE_LOCATION=14483
	I0701 23:13:36.516221  275844 notify.go:193] Checking for updates...
	I0701 23:13:36.517990  275844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 23:13:36.519337  275844 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:13:36.520961  275844 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 23:13:36.522517  275844 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 23:13:36.524336  275844 config.go:178] Loaded profile config "default-k8s-different-port-20220701230032-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:13:36.524783  275844 driver.go:360] Setting default libvirt URI to qemu:///system
	I0701 23:13:36.571678  275844 docker.go:137] docker version: linux-20.10.17
	I0701 23:13:36.571797  275844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:13:36.688003  275844 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-07-01 23:13:36.603240517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:13:36.688097  275844 docker.go:254] overlay module found
	I0701 23:13:36.689718  275844 out.go:177] * Using the docker driver based on existing profile
	I0701 23:13:36.691073  275844 start.go:284] selected driver: docker
	I0701 23:13:36.691091  275844 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:13:36.691176  275844 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 23:13:36.711421  275844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:13:36.815393  275844 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-07-01 23:13:36.741940503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:13:36.815669  275844 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 23:13:36.815700  275844 cni.go:95] Creating CNI manager for ""
	I0701 23:13:36.815708  275844 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:13:36.815734  275844 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:13:36.817973  275844 out.go:177] * Starting control plane node default-k8s-different-port-20220701230032-10066 in cluster default-k8s-different-port-20220701230032-10066
	I0701 23:13:36.819338  275844 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0701 23:13:36.820691  275844 out.go:177] * Pulling base image ...
	I0701 23:13:36.821863  275844 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:13:36.821911  275844 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4
	I0701 23:13:36.821925  275844 cache.go:57] Caching tarball of preloaded images
	I0701 23:13:36.821988  275844 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0701 23:13:36.822107  275844 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 23:13:36.822124  275844 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on containerd
	I0701 23:13:36.822229  275844 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/config.json ...
	I0701 23:13:36.857028  275844 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0701 23:13:36.857061  275844 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0701 23:13:36.857085  275844 cache.go:208] Successfully downloaded all kic artifacts
	I0701 23:13:36.857128  275844 start.go:352] acquiring machines lock for default-k8s-different-port-20220701230032-10066: {Name:mk7518221e8259d073969ba977a5dbef99fe5935 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:13:36.857241  275844 start.go:356] acquired machines lock for "default-k8s-different-port-20220701230032-10066" in 79.413µs
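
The two lines above show minikube serializing machine operations behind a per-profile lock, acquired here in 79.413µs, with the dumped options Delay:500ms and Timeout:10m0s. As a rough Go sketch of that acquire loop only (minikube uses a lock library, not a hand-rolled lock file; the path below is hypothetical):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquireLock polls for an exclusive lock file, sleeping delay between
	// tries until timeout expires, matching the Delay/Timeout shape dumped
	// above. Illustrative only; not minikube's actual implementation.
	func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out waiting for %s: %w", path, err)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		// hypothetical lock path, for illustration only
		release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; safe to start the machine")
	}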
	I0701 23:13:36.857265  275844 start.go:94] Skipping create...Using existing machine configuration
	I0701 23:13:36.857273  275844 fix.go:55] fixHost starting: 
	I0701 23:13:36.857565  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:13:36.889959  275844 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220701230032-10066: state=Stopped err=<nil>
	W0701 23:13:36.890003  275844 fix.go:129] unexpected machine state, will restart: <nil>
	I0701 23:13:36.892196  275844 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220701230032-10066" ...
	I0701 23:13:34.335098  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:36.335670  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:36.893583  275844 cli_runner.go:164] Run: docker start default-k8s-different-port-20220701230032-10066
	I0701 23:13:37.260876  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:13:37.298699  275844 kic.go:416] container "default-k8s-different-port-20220701230032-10066" state is running.
	I0701 23:13:37.299071  275844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:13:37.333911  275844 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/config.json ...
	I0701 23:13:37.334149  275844 machine.go:88] provisioning docker machine ...
	I0701 23:13:37.334173  275844 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220701230032-10066"
	I0701 23:13:37.334223  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:37.368604  275844 main.go:134] libmachine: Using SSH client type: native
	I0701 23:13:37.368836  275844 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0701 23:13:37.368867  275844 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220701230032-10066 && echo "default-k8s-different-port-20220701230032-10066" | sudo tee /etc/hostname
	I0701 23:13:37.369499  275844 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35278->127.0.0.1:49442: read: connection reset by peer
	I0701 23:13:40.494516  275844 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220701230032-10066
	
	I0701 23:13:40.494611  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:40.527972  275844 main.go:134] libmachine: Using SSH client type: native
	I0701 23:13:40.528160  275844 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0701 23:13:40.528184  275844 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220701230032-10066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220701230032-10066/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220701230032-10066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 23:13:40.641942  275844 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0701 23:13:40.641973  275844 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
	I0701 23:13:40.642000  275844 ubuntu.go:177] setting up certificates
	I0701 23:13:40.642011  275844 provision.go:83] configureAuth start
	I0701 23:13:40.642064  275844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:13:40.675855  275844 provision.go:138] copyHostCerts
	I0701 23:13:40.675913  275844 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
	I0701 23:13:40.675927  275844 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
	I0701 23:13:40.675991  275844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
	I0701 23:13:40.676060  275844 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
	I0701 23:13:40.676071  275844 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
	I0701 23:13:40.676098  275844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
	I0701 23:13:40.676148  275844 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
	I0701 23:13:40.676158  275844 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
	I0701 23:13:40.676192  275844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1679 bytes)
	I0701 23:13:40.676235  275844 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220701230032-10066 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220701230032-10066]
	I0701 23:13:40.954393  275844 provision.go:172] copyRemoteCerts
	I0701 23:13:40.954451  275844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 23:13:40.954482  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:40.989611  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.073447  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 23:13:41.090826  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0701 23:13:41.107547  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 23:13:41.124219  275844 provision.go:86] duration metric: configureAuth took 482.194415ms
	I0701 23:13:41.124245  275844 ubuntu.go:193] setting minikube options for container-runtime
	I0701 23:13:41.124417  275844 config.go:178] Loaded profile config "default-k8s-different-port-20220701230032-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:13:41.124431  275844 machine.go:91] provisioned docker machine in 3.790266635s
	I0701 23:13:41.124441  275844 start.go:306] post-start starting for "default-k8s-different-port-20220701230032-10066" (driver="docker")
	I0701 23:13:41.124452  275844 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 23:13:41.124510  275844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 23:13:41.124554  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.158325  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.245657  275844 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 23:13:41.248516  275844 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 23:13:41.248538  275844 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 23:13:41.248546  275844 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 23:13:41.248551  275844 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0701 23:13:41.248559  275844 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
	I0701 23:13:41.248598  275844 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
	I0701 23:13:41.248664  275844 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem -> 100662.pem in /etc/ssl/certs
	I0701 23:13:41.248742  275844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 23:13:41.255535  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:13:41.272444  275844 start.go:309] post-start completed in 147.990653ms
	I0701 23:13:41.272501  275844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 23:13:41.272534  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.306973  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.391227  275844 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 23:13:41.395145  275844 fix.go:57] fixHost completed within 4.53786816s
	I0701 23:13:41.395167  275844 start.go:81] releasing machines lock for "default-k8s-different-port-20220701230032-10066", held for 4.537914302s
	I0701 23:13:41.395240  275844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.428938  275844 ssh_runner.go:195] Run: systemctl --version
	I0701 23:13:41.428983  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.428986  275844 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0701 23:13:41.429036  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.463442  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.464061  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:38.835336  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:41.334767  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:43.334801  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:41.546236  275844 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 23:13:41.557434  275844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 23:13:41.566944  275844 docker.go:179] disabling docker service ...
	I0701 23:13:41.566994  275844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 23:13:41.575898  275844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 23:13:41.584165  275844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 23:13:41.651388  275844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 23:13:41.723308  275844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0701 23:13:41.731887  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 23:13:41.744366  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0701 23:13:41.752324  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0701 23:13:41.760056  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0701 23:13:41.767864  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0701 23:13:41.775399  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0701 23:13:41.782555  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
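
The base64 payload in the command above is just a one-line TOML header for the containerd drop-in. A quick Go check, with the string copied verbatim from the command, confirms what the drop-in contains:

	package main

	import (
		"encoding/base64"
		"fmt"
	)

	func main() {
		// payload copied from the tee command above
		b, err := base64.StdEncoding.DecodeString("dmVyc2lvbiA9IDIK")
		if err != nil {
			panic(err)
		}
		fmt.Print(string(b)) // prints: version = 2
	}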
	I0701 23:13:41.794357  275844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 23:13:41.800246  275844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 23:13:41.806090  275844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 23:13:41.881056  275844 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 23:13:41.950865  275844 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0701 23:13:41.950932  275844 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0701 23:13:41.955104  275844 start.go:471] Will wait 60s for crictl version
	I0701 23:13:41.955155  275844 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:13:41.981690  275844 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-07-01T23:13:41Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
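
retry.go backs off and re-runs the probe because containerd was just restarted and its CRI server reports it is not initialized yet. A minimal Go sketch of that wait-and-retry shape; the attempt count and fixed delay are illustrative assumptions, not minikube's actual policy:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// retryCommand re-runs a command until it succeeds or attempts run out,
	// the pattern behind "will retry after 11.04660288s" above.
	func retryCommand(attempts int, delay time.Duration, name string, args ...string) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = exec.Command(name, args...).Run(); err == nil {
				return nil
			}
			fmt.Printf("attempt %d failed: %v; retrying in %s\n", i+1, err, delay)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		// wait for the CRI to come up, as the log does with "sudo crictl version"
		if err := retryCommand(5, 11*time.Second, "sudo", "crictl", "version"); err != nil {
			fmt.Println("runtime never became ready:", err)
		}
	}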
	I0701 23:13:45.834614  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:47.835771  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:53.029041  275844 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:13:53.051421  275844 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0701 23:13:53.051470  275844 ssh_runner.go:195] Run: containerd --version
	I0701 23:13:53.078982  275844 ssh_runner.go:195] Run: containerd --version
	I0701 23:13:53.109597  275844 out.go:177] * Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	I0701 23:13:50.335036  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:52.834973  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
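
The interleaved pod_ready lines (process 269883) belong to the parallel no-preload test: coredns stays Pending because the cluster's only node still carries the node.kubernetes.io/not-ready taint, so the scheduler cannot place the pod. A minimal client-go sketch of this kind of readiness check, assuming a reachable kubeconfig at the default home path (illustrative, not minikube's pod_ready.go):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the PodReady condition is True; while the
	// pod above is Pending and unschedulable, this stays false.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// assumes a kubeconfig at the default home location
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-6d4b75cb6d-mbfz4", metav1.GetOptions{}) // pod name from the log above
		if err != nil {
			panic(err)
		}
		fmt.Println("ready:", isPodReady(pod))
	}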
	I0701 23:13:53.110955  275844 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220701230032-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 23:13:53.143106  275844 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0701 23:13:53.146306  275844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:13:53.155228  275844 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:13:53.155287  275844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:13:53.177026  275844 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:13:53.177047  275844 containerd.go:461] Images already preloaded, skipping extraction
	I0701 23:13:53.177094  275844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:13:53.198475  275844 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:13:53.198501  275844 cache_images.go:84] Images are preloaded, skipping loading
	I0701 23:13:53.198643  275844 ssh_runner.go:195] Run: sudo crictl info
	I0701 23:13:53.221518  275844 cni.go:95] Creating CNI manager for ""
	I0701 23:13:53.221540  275844 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:13:53.221552  275844 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0701 23:13:53.221564  275844 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220701230032-10066 NodeName:default-k8s-different-port-20220701230032-10066 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0701 23:13:53.221715  275844 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220701230032-10066"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
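The three documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, plus the KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new below. A hedged sketch for sanity-checking a config like this without mutating the node; --dry-run is a standard kubeadm flag, though its output varies by version:

	# Render kubeadm's plan for this config without applying anything
	sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run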
	I0701 23:13:53.221814  275844 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220701230032-10066 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
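The drop-in above lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and the unit at /lib/systemd/system/kubelet.service (the scp lines that follow). Picking up such an override manually is plain systemd, nothing minikube-specific:

	# Re-read unit files after installing the drop-in, then restart kubelet
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet
	# Show the effective unit, including the ExecStart override
	systemctl cat kubelet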
	I0701 23:13:53.221875  275844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0701 23:13:53.228898  275844 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 23:13:53.228952  275844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 23:13:53.235366  275844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0701 23:13:53.247371  275844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 23:13:53.259313  275844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0701 23:13:53.271530  275844 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0701 23:13:53.274142  275844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:13:53.282892  275844 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066 for IP: 192.168.76.2
	I0701 23:13:53.282980  275844 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
	I0701 23:13:53.283015  275844 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
	I0701 23:13:53.283078  275844 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/client.key
	I0701 23:13:53.283124  275844 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key.31bdca25
	I0701 23:13:53.283163  275844 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.key
	I0701 23:13:53.283252  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem (1338 bytes)
	W0701 23:13:53.283280  275844 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066_empty.pem, impossibly tiny 0 bytes
	I0701 23:13:53.283295  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 23:13:53.283320  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
	I0701 23:13:53.283343  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
	I0701 23:13:53.283367  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1679 bytes)
	I0701 23:13:53.283409  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:13:53.283939  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0701 23:13:53.300388  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 23:13:53.317215  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 23:13:53.333335  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0701 23:13:53.349529  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 23:13:53.365494  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0701 23:13:53.381103  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 23:13:53.396977  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 23:13:53.412881  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 23:13:53.429709  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem --> /usr/share/ca-certificates/10066.pem (1338 bytes)
	I0701 23:13:53.446017  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /usr/share/ca-certificates/100662.pem (1708 bytes)
	I0701 23:13:53.461814  275844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 23:13:53.473437  275844 ssh_runner.go:195] Run: openssl version
	I0701 23:13:53.478032  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 23:13:53.484818  275844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:13:53.487660  275844 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul  1 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:13:53.487710  275844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:13:53.492105  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 23:13:53.498584  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10066.pem && ln -fs /usr/share/ca-certificates/10066.pem /etc/ssl/certs/10066.pem"
	I0701 23:13:53.505448  275844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10066.pem
	I0701 23:13:53.508315  275844 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul  1 22:28 /usr/share/ca-certificates/10066.pem
	I0701 23:13:53.508365  275844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10066.pem
	I0701 23:13:53.512833  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10066.pem /etc/ssl/certs/51391683.0"
	I0701 23:13:53.519315  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100662.pem && ln -fs /usr/share/ca-certificates/100662.pem /etc/ssl/certs/100662.pem"
	I0701 23:13:53.526653  275844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100662.pem
	I0701 23:13:53.529618  275844 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul  1 22:28 /usr/share/ca-certificates/100662.pem
	I0701 23:13:53.529700  275844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100662.pem
	I0701 23:13:53.534593  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100662.pem /etc/ssl/certs/3ec20f2e.0"
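The openssl/ln pairs above are the standard c_rehash idiom: each CA is symlinked under /etc/ssl/certs by its subject hash so OpenSSL-based clients can find it. The same idiom for any one certificate, using a path from this run:

	# Hash-named symlink for a CA cert, matching the b5213941.0-style links above
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$CERT" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$CERT").0"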
	I0701 23:13:53.541972  275844 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:13:53.542071  275844 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0701 23:13:53.542137  275844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:13:53.565066  275844 cri.go:87] found id: "e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e"
	I0701 23:13:53.565094  275844 cri.go:87] found id: "b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7"
	I0701 23:13:53.565103  275844 cri.go:87] found id: "50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52"
	I0701 23:13:53.565110  275844 cri.go:87] found id: "f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c"
	I0701 23:13:53.565115  275844 cri.go:87] found id: "a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2"
	I0701 23:13:53.565121  275844 cri.go:87] found id: "042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c"
	I0701 23:13:53.565127  275844 cri.go:87] found id: ""
	I0701 23:13:53.565155  275844 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0701 23:13:53.577099  275844 cri.go:114] JSON = null
	W0701 23:13:53.577140  275844 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
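The mismatch warned about above (runc sees nothing under the k8s.io root while crictl sees six containers) can be reproduced directly; both commands are taken verbatim from this log:

	# Low-level runtime view: returned null here
	sudo runc --root /run/containerd/runc/k8s.io list -f json
	# CRI view: returned six kube-system container IDs
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system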
	I0701 23:13:53.577183  275844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 23:13:53.583727  275844 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0701 23:13:53.583745  275844 kubeadm.go:626] restartCluster start
	I0701 23:13:53.583773  275844 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 23:13:53.589812  275844 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:53.590282  275844 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220701230032-10066" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:13:53.590469  275844 kubeconfig.go:127] "default-k8s-different-port-20220701230032-10066" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig - will repair!
	I0701 23:13:53.590950  275844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:13:53.592051  275844 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 23:13:53.598266  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:53.598304  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:53.605628  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:53.806026  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:53.806089  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:53.814576  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.005749  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.005835  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.013967  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.206355  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.206416  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.215350  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.406581  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.406651  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.415525  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.605755  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.605834  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.614602  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.805813  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.805894  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.814430  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.006748  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.006824  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.015390  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.206606  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.206712  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.215161  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.406468  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.406570  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.415209  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.606590  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.606691  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.615437  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.806738  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.806828  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.815002  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.006349  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.006435  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.014726  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.205912  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.205993  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.214477  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.405750  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.405831  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.414060  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.334779  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:57.835309  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:56.606652  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.606715  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.615356  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.615374  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.615402  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.623156  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.623180  275844 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0701 23:13:56.623187  275844 kubeadm.go:1092] stopping kube-system containers ...
	I0701 23:13:56.623201  275844 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0701 23:13:56.623258  275844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:13:56.649113  275844 cri.go:87] found id: "e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e"
	I0701 23:13:56.649133  275844 cri.go:87] found id: "b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7"
	I0701 23:13:56.649140  275844 cri.go:87] found id: "50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52"
	I0701 23:13:56.649146  275844 cri.go:87] found id: "f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c"
	I0701 23:13:56.649152  275844 cri.go:87] found id: "a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2"
	I0701 23:13:56.649158  275844 cri.go:87] found id: "042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c"
	I0701 23:13:56.649164  275844 cri.go:87] found id: ""
	I0701 23:13:56.649169  275844 cri.go:232] Stopping containers: [e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7 50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52 f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2 042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c]
	I0701 23:13:56.649212  275844 ssh_runner.go:195] Run: which crictl
	I0701 23:13:56.652179  275844 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7 50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52 f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2 042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c
	I0701 23:13:56.676014  275844 ssh_runner.go:195] Run: sudo systemctl stop kubelet
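The shutdown step above, written as one pipeline instead of a hand-built ID list; a sketch, where -r keeps xargs from invoking crictl stop with no arguments:

	# Stop every kube-system container the CRI knows about, then park kubelet
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system \
	  | xargs -r sudo crictl stop
	sudo systemctl stop kubelet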
	I0701 23:13:56.685537  275844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:13:56.692196  275844 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul  1 23:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul  1 23:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Jul  1 23:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jul  1 23:00 /etc/kubernetes/scheduler.conf
	
	I0701 23:13:56.692247  275844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0701 23:13:56.698641  275844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0701 23:13:56.704856  275844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0701 23:13:56.711153  275844 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.711210  275844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 23:13:56.717322  275844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0701 23:13:56.723423  275844 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.723459  275844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0701 23:13:56.729312  275844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:13:56.736598  275844 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0701 23:13:56.736617  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:56.781688  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:57.445598  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:57.633371  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:57.679946  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
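The restart path replays individual kubeadm phases against the existing data directory rather than running a full init. The sequence above, compacted; $phase is left unquoted on purpose so multi-word phase names split into arguments:

	CFG=/var/tmp/minikube/kubeadm.yaml
	for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
	  sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" \
	    kubeadm init phase $phase --config "$CFG"
	done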
	I0701 23:13:57.749368  275844 api_server.go:51] waiting for apiserver process to appear ...
	I0701 23:13:57.749432  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:13:58.318180  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:13:58.818690  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:13:58.830974  275844 api_server.go:71] duration metric: took 1.081606586s to wait for apiserver process to appear ...
	I0701 23:13:58.831001  275844 api_server.go:87] waiting for apiserver healthz status ...
	I0701 23:13:58.831034  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:13:58.831436  275844 api_server.go:256] stopped: https://192.168.76.2:8444/healthz: Get "https://192.168.76.2:8444/healthz": dial tcp 192.168.76.2:8444: connect: connection refused
	I0701 23:13:59.331708  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:14:01.921615  275844 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:14:01.921654  275844 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:14:02.332201  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:14:02.336755  275844 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:14:02.336792  275844 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:14:02.831892  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:14:02.836248  275844 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:14:02.836275  275844 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:14:03.331795  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:14:03.337047  275844 api_server.go:266] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0701 23:14:03.343503  275844 api_server.go:140] control plane version: v1.24.2
	I0701 23:14:03.343525  275844 api_server.go:130] duration metric: took 4.512518171s to wait for apiserver health ...
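The healthz polling above can be done by hand: /healthz is readable without credentials (the system:public-info-viewer ClusterRole covers it), and ?verbose requests the per-check [+]/[-] breakdown seen in the 500 bodies:

	# -k: the apiserver presents minikube's self-signed chain
	curl -k "https://192.168.76.2:8444/healthz?verbose"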
	I0701 23:14:03.343535  275844 cni.go:95] Creating CNI manager for ""
	I0701 23:14:03.343540  275844 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:14:03.345598  275844 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:13:59.835489  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:02.335364  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:03.347224  275844 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:14:03.350686  275844 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:14:03.350707  275844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:14:03.363866  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
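The CNI rollout uses the cached kubectl against the in-node kubeconfig, exactly as logged above; confirming that the kindnet DaemonSet actually landed is one more call with the same flags:

	sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  apply -f /var/tmp/minikube/cni.yaml
	sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get daemonsets -n kube-system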
	I0701 23:14:04.295415  275844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 23:14:04.301798  275844 system_pods.go:59] 9 kube-system pods found
	I0701 23:14:04.301825  275844 system_pods.go:61] "coredns-6d4b75cb6d-zmnqs" [f0e0d22f-cd83-4531-8778-32070816b159] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:14:04.301837  275844 system_pods.go:61] "etcd-default-k8s-different-port-20220701230032-10066" [c4b3993a-3a6c-4827-8250-b951a48b9432] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0701 23:14:04.301844  275844 system_pods.go:61] "kindnet-49h72" [bee4a070-eb2f-45af-a824-f8ebb08e21cb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0701 23:14:04.301851  275844 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220701230032-10066" [2ce9acd5-e8e7-425b-bb9b-5dd480397910] Running
	I0701 23:14:04.301860  275844 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220701230032-10066" [2fec1fad-34c5-4b47-8713-8e789b816ca6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0701 23:14:04.301868  275844 system_pods.go:61] "kube-proxy-qg5j2" [c67a38f9-ae75-40ea-8992-85a437368c50] Running
	I0701 23:14:04.301873  275844 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220701230032-10066" [49056cd0-4107-4377-ba51-b97af35cbe72] Running
	I0701 23:14:04.301882  275844 system_pods.go:61] "metrics-server-5c6f97fb75-mkq9q" [f5b66095-14d2-4de4-9f1d-2cd5371ec0fd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:14:04.301890  275844 system_pods.go:61] "storage-provisioner" [6e0344bb-c7de-41f4-95d2-f30576ae036c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:14:04.301898  275844 system_pods.go:74] duration metric: took 6.458628ms to wait for pod list to return data ...
	I0701 23:14:04.301907  275844 node_conditions.go:102] verifying NodePressure condition ...
	I0701 23:14:04.304305  275844 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0701 23:14:04.304330  275844 node_conditions.go:123] node cpu capacity is 8
	I0701 23:14:04.304343  275844 node_conditions.go:105] duration metric: took 2.432316ms to run NodePressure ...
	I0701 23:14:04.304363  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:14:04.434166  275844 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0701 23:14:04.438097  275844 kubeadm.go:777] kubelet initialised
	I0701 23:14:04.438123  275844 kubeadm.go:778] duration metric: took 3.933976ms waiting for restarted kubelet to initialise ...
	I0701 23:14:04.438131  275844 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:14:04.443068  275844 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace to be "Ready" ...
	I0701 23:14:06.448162  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:04.335402  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:06.335651  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:08.448866  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:10.948772  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:08.834525  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:11.335287  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:12.949108  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:15.448393  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:13.834432  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:15.835251  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:18.334462  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:17.948235  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:19.948671  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:20.334833  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:22.334898  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:21.948914  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:23.949013  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:26.448470  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:24.335241  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:26.834599  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:28.948377  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:30.948441  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:29.334764  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:31.834659  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:32.948974  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:35.448453  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:33.835115  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:36.334527  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:37.448971  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:39.449007  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:38.834645  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:41.335647  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:41.948832  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:43.948861  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:46.448244  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:43.834536  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:45.835152  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:48.334898  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:48.448469  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:50.448941  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:50.335336  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:52.834828  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:52.948268  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:54.948294  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:55.334778  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:56.331712  269883 pod_ready.go:81] duration metric: took 4m0.0026135s waiting for pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace to be "Ready" ...
	E0701 23:14:56.331755  269883 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0701 23:14:56.331779  269883 pod_ready.go:38] duration metric: took 4m0.007826908s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:14:56.331809  269883 kubeadm.go:630] restartCluster took 4m10.917993696s
	W0701 23:14:56.331941  269883 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0701 23:14:56.331974  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0701 23:14:57.984431  269883 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.65243003s)
	I0701 23:14:57.984496  269883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:14:57.994269  269883 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:14:58.001094  269883 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0701 23:14:58.001159  269883 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:14:58.007683  269883 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 23:14:58.007734  269883 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0701 23:14:56.949272  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:58.949543  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:01.449627  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:03.950758  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:06.448470  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:06.936698  269883 out.go:204]   - Generating certificates and keys ...
	I0701 23:15:06.939424  269883 out.go:204]   - Booting up control plane ...
	I0701 23:15:06.941904  269883 out.go:204]   - Configuring RBAC rules ...
	I0701 23:15:06.944403  269883 cni.go:95] Creating CNI manager for ""
	I0701 23:15:06.944429  269883 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:15:06.945976  269883 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:15:06.947445  269883 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:15:06.951630  269883 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:15:06.951650  269883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:15:06.966756  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0701 23:15:07.699280  269883 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 23:15:07.699401  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:07.699419  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04 minikube.k8s.io/name=no-preload-20220701225718-10066 minikube.k8s.io/updated_at=2022_07_01T23_15_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:07.706386  269883 ops.go:34] apiserver oom_adj: -16
	I0701 23:15:07.765556  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:08.338006  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:08.448681  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:10.448820  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:08.838005  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:09.337996  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:09.837437  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:10.337629  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:10.837363  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:11.337763  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:11.838075  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:12.338080  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:12.837649  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:13.337387  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:12.449226  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:14.948189  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:13.838035  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:14.337961  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:14.838063  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:15.338241  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:15.837500  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:16.337613  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:16.838363  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:17.337701  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:17.838061  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:18.337742  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:18.838306  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:19.337570  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:19.837680  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:19.892044  269883 kubeadm.go:1045] duration metric: took 12.192690701s to wait for elevateKubeSystemPrivileges.
	I0701 23:15:19.892072  269883 kubeadm.go:397] StartCluster complete in 4m34.521249474s
	I0701 23:15:19.892091  269883 settings.go:142] acquiring lock: {Name:mk319951f11766fbe002e53432d5b04e4322851e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:15:19.892193  269883 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:15:19.893038  269883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:15:20.407163  269883 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220701225718-10066" rescaled to 1
	I0701 23:15:20.407233  269883 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 23:15:20.409054  269883 out.go:177] * Verifying Kubernetes components...
	I0701 23:15:20.407277  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 23:15:20.407307  269883 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0701 23:15:20.407455  269883 config.go:178] Loaded profile config "no-preload-20220701225718-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:15:20.410261  269883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:15:20.410307  269883 addons.go:65] Setting dashboard=true in profile "no-preload-20220701225718-10066"
	I0701 23:15:20.410316  269883 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220701225718-10066"
	I0701 23:15:20.410322  269883 addons.go:65] Setting metrics-server=true in profile "no-preload-20220701225718-10066"
	I0701 23:15:20.410331  269883 addons.go:153] Setting addon dashboard=true in "no-preload-20220701225718-10066"
	I0701 23:15:20.410333  269883 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220701225718-10066"
	W0701 23:15:20.410339  269883 addons.go:162] addon dashboard should already be in state true
	I0701 23:15:20.410339  269883 addons.go:153] Setting addon metrics-server=true in "no-preload-20220701225718-10066"
	W0701 23:15:20.410348  269883 addons.go:162] addon metrics-server should already be in state true
	I0701 23:15:20.410378  269883 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 23:15:20.410384  269883 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 23:15:20.410308  269883 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220701225718-10066"
	I0701 23:15:20.410415  269883 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220701225718-10066"
	W0701 23:15:20.410428  269883 addons.go:162] addon storage-provisioner should already be in state true
	I0701 23:15:20.410464  269883 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 23:15:20.410690  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.410883  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.410898  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.410944  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.462647  269883 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0701 23:15:20.462859  269883 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220701225718-10066"
	I0701 23:15:20.464095  269883 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0701 23:15:20.464150  269883 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0701 23:15:20.464162  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0701 23:15:20.464109  269883 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0701 23:15:20.464170  269883 addons.go:162] addon default-storageclass should already be in state true
	I0701 23:15:20.465490  269883 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0701 23:15:20.466852  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0701 23:15:20.466866  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0701 23:15:20.465507  269883 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 23:15:20.466910  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:15:16.948842  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:18.949526  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:21.448543  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:20.468347  269883 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:15:20.468364  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 23:15:20.468412  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:15:20.465559  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:15:20.467550  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.497855  269883 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220701225718-10066" to be "Ready" ...
	I0701 23:15:20.497910  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0701 23:15:20.515144  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:15:20.520029  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:15:20.522289  269883 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 23:15:20.522310  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 23:15:20.522357  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:15:20.524783  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:15:20.568239  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:15:20.635327  269883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:15:20.635528  269883 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0701 23:15:20.635546  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0701 23:15:20.635773  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0701 23:15:20.635792  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0701 23:15:20.720153  269883 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0701 23:15:20.720184  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0701 23:15:20.720330  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0701 23:15:20.720356  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0701 23:15:20.735914  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0701 23:15:20.735942  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0701 23:15:20.738036  269883 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:15:20.738058  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0701 23:15:20.751468  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0701 23:15:20.751494  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0701 23:15:20.751989  269883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:15:20.830998  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0701 23:15:20.831029  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0701 23:15:20.835184  269883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 23:15:20.919071  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0701 23:15:20.919097  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0701 23:15:20.931803  269883 start.go:809] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
	I0701 23:15:20.938634  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0701 23:15:20.938663  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0701 23:15:21.027932  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0701 23:15:21.027961  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0701 23:15:21.120018  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:15:21.120044  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0701 23:15:21.139289  269883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:15:21.542831  269883 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220701225718-10066"
	I0701 23:15:22.318204  269883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.178852341s)
	I0701 23:15:22.320260  269883 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0701 23:15:22.321764  269883 addons.go:414] enableAddons completed in 1.914474598s
	I0701 23:15:22.506049  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:23.449129  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:25.948784  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:25.003072  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:27.003942  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:28.448748  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:30.948490  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:29.503567  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:31.503801  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:33.448177  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:35.948336  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:33.504159  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:35.504602  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:38.003422  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:37.948379  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:39.948560  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:40.504288  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:42.504480  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:41.949060  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:43.949319  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:46.449018  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:44.504514  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:47.002872  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:48.948340  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:51.448205  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:49.003639  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:51.503660  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:53.448249  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:55.448938  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:53.503915  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:56.003212  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:58.003807  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:57.948938  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:00.448920  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:00.504360  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:03.003336  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:02.449149  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:04.449385  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:05.503324  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:07.503773  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:06.948721  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:09.448775  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:10.003039  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:12.003124  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:11.948462  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:14.448466  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:16.449003  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:14.504207  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:17.003682  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:18.948883  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:21.448510  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:19.503321  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:21.503670  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:23.949051  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:26.448494  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:23.504169  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:26.003440  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:28.448711  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:30.950336  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:28.503980  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:31.003131  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:33.003828  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:33.448272  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:35.448817  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:35.503530  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:37.503721  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:37.449097  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:39.948158  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:39.504219  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:42.002779  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:41.948654  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:43.948719  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:46.448800  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:44.003891  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:46.503378  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:48.948666  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:50.948781  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:48.503897  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:51.003349  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:53.448686  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:55.948675  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:53.504221  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:56.003927  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:58.448263  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:00.948090  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:58.503637  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:00.503665  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:02.504224  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:02.948518  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:04.948735  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:05.003494  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:07.503949  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:06.948781  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:09.448480  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:11.448536  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:09.504075  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:12.003349  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:13.448566  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:15.948312  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:14.004090  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:16.503717  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:17.948940  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:20.449080  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:18.504348  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:21.002849  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:23.003827  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:22.948356  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:24.949063  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:25.503280  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:27.503458  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:26.949277  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:29.448968  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:29.503895  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:32.003296  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:31.948774  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:33.948802  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:36.448693  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:34.003684  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:36.504246  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:38.948200  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:41.449095  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:39.003597  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:41.504297  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:43.948596  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:46.448338  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:44.003653  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:46.003704  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:48.448406  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:50.449049  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:48.503830  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:51.002929  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:52.949418  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:55.448267  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:53.503901  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:56.003435  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:57.948337  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:59.949522  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:58.503409  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:00.504015  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:02.504075  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:02.449005  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:18:04.445635  275844 pod_ready.go:81] duration metric: took 4m0.002536043s waiting for pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace to be "Ready" ...
	E0701 23:18:04.445658  275844 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace to be "Ready" (will not retry!)
	I0701 23:18:04.445676  275844 pod_ready.go:38] duration metric: took 4m0.00753476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:18:04.445715  275844 kubeadm.go:630] restartCluster took 4m10.861963713s
	W0701 23:18:04.445855  275844 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
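The 4m0s above is minikube's WaitExtra budget: pod_ready.go re-polls the pod status roughly every 2-2.5 seconds and, when the budget runs out, restartCluster is abandoned and the cluster is reset with kubeadm on the next lines. The coredns pod never left Pending because the lone node still carried the node.kubernetes.io/not-ready taint, which the kubelet only clears once a CNI is in place. A minimal way to confirm both symptoms from outside the test run, assuming the kubeconfig context is named after the profile (minikube's usual convention; the profile name is taken from the node labels later in this log):

	# Show the taints blocking scheduling on the profile's node(s).
	kubectl --context default-k8s-different-port-20220701230032-10066 \
	  get node -o jsonpath='{.items[*].spec.taints}'; echo
	# Show the PodScheduled condition the log keeps printing for coredns.
	kubectl --context default-k8s-different-port-20220701230032-10066 \
	  -n kube-system get pod coredns-6d4b75cb6d-zmnqs \
	  -o jsonpath='{.status.conditions[?(@.type=="PodScheduled")]}'; echo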
	I0701 23:18:04.445882  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0701 23:18:06.095490  275844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.649588457s)
	I0701 23:18:06.095547  275844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:18:06.104815  275844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:18:06.112334  275844 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0701 23:18:06.112376  275844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:18:06.119483  275844 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
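The exit status 2 from ls is the expected outcome here, not a failure: the kubeadm reset a few lines up wipes /etc/kubernetes, so all four kubeconfigs are gone, minikube skips the stale-config cleanup, and falls through to a fresh kubeadm init (SystemVerification and the other preflight checks are ignored because they do not hold inside a docker-driver node container). A rough shell equivalent of the check, runnable inside the node, for example via minikube ssh:

	# Mirrors minikube's config check: any missing file means "nothing stale to clean up".
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo test -e "/etc/kubernetes/$f" && echo "present: $f" || echo "missing: $f"
	done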
	I0701 23:18:06.119534  275844 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0701 23:18:06.370658  275844 out.go:204]   - Generating certificates and keys ...
	I0701 23:18:05.003477  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:07.003973  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:07.277086  275844 out.go:204]   - Booting up control plane ...
	I0701 23:18:09.503332  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:11.504503  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:14.316275  275844 out.go:204]   - Configuring RBAC rules ...
	I0701 23:18:14.730162  275844 cni.go:95] Creating CNI manager for ""
	I0701 23:18:14.730189  275844 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:18:14.731634  275844 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:18:14.732857  275844 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:18:14.739597  275844 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:18:14.739622  275844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:18:14.825236  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
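With the docker driver and the containerd runtime there is no built-in CNI, so cni.go selects kindnet, copies the rendered manifest (2,429 bytes here) to /var/tmp/minikube/cni.yaml over SSH, and applies it with the bundled kubectl, as the line above shows. Once the kindnet pod is running, the kubelet drops the not-ready taint and pending pods such as coredns can finally schedule. A sketch for watching that transition; the app=kindnet label matches the upstream kindnet manifest and is an assumption about the bundled one:

	# In one terminal: watch the CNI DaemonSet pod come up.
	kubectl -n kube-system get pods -l app=kindnet -w
	# In another: watch the node flip NotReady -> Ready once the CNI is active.
	kubectl get nodes -w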
	I0701 23:18:15.561507  275844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 23:18:15.561626  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:15.561637  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04 minikube.k8s.io/name=default-k8s-different-port-20220701230032-10066 minikube.k8s.io/updated_at=2022_07_01T23_18_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:15.568394  275844 ops.go:34] apiserver oom_adj: -16
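ops.go reads /proc/<pid>/oom_adj of the kube-apiserver as a cheap health-and-protection probe: on the legacy oom_adj scale (-17 to 15, where -17 disables OOM kills entirely) a strongly negative value such as the -16 logged here means the kernel will avoid OOM-killing the apiserver, and reading it at all confirms the process is up. The same probe by hand, inside the node:

	# Prints -16 for a protected, running apiserver (same command the log shows above).
	cat /proc/$(pgrep kube-apiserver)/oom_adj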
	I0701 23:18:15.634685  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:16.190642  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:14.002820  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:16.003780  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:16.690023  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:17.190952  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:17.690163  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:18.191022  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:18.690054  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:19.190723  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:19.690097  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:20.190968  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:20.691032  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:21.190434  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:18.503619  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:20.504289  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:23.003341  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:21.690038  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:22.190938  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:22.690621  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:23.190651  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:23.690833  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:24.190934  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:24.690962  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:25.190256  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:25.690333  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:26.190101  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:26.690887  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:27.190074  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:27.248216  275844 kubeadm.go:1045] duration metric: took 11.686670316s to wait for elevateKubeSystemPrivileges.
	I0701 23:18:27.248246  275844 kubeadm.go:397] StartCluster complete in 4m33.70628023s
	I0701 23:18:27.248264  275844 settings.go:142] acquiring lock: {Name:mk319951f11766fbe002e53432d5b04e4322851e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:18:27.248355  275844 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:18:27.249185  275844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:18:27.763199  275844 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220701230032-10066" rescaled to 1
	I0701 23:18:27.763267  275844 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 23:18:27.766618  275844 out.go:177] * Verifying Kubernetes components...
	I0701 23:18:27.763306  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 23:18:27.763330  275844 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0701 23:18:27.766747  275844 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.766765  275844 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220701230032-10066"
	W0701 23:18:27.766778  275844 addons.go:162] addon storage-provisioner should already be in state true
	I0701 23:18:27.766806  275844 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.766825  275844 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.766828  275844 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:18:27.763473  275844 config.go:178] Loaded profile config "default-k8s-different-port-20220701230032-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:18:27.766824  275844 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.768481  275844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:18:27.768504  275844 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220701230032-10066"
	W0701 23:18:27.766835  275844 addons.go:162] addon dashboard should already be in state true
	I0701 23:18:27.768632  275844 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:18:27.766843  275844 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.768713  275844 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220701230032-10066"
	W0701 23:18:27.768733  275844 addons.go:162] addon metrics-server should already be in state true
	I0701 23:18:27.768768  275844 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:18:27.767332  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.768887  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.769184  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.769187  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.831262  275844 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0701 23:18:27.832550  275844 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 23:18:27.833969  275844 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:18:27.833992  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 23:18:27.834040  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:18:27.835526  275844 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0701 23:18:27.833023  275844 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.837673  275844 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	W0701 23:18:27.837677  275844 addons.go:162] addon default-storageclass should already be in state true
	I0701 23:18:25.003796  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:27.504253  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:27.837692  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0701 23:18:27.839084  275844 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0701 23:18:27.839099  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0701 23:18:27.839108  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0701 23:18:27.839153  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:18:27.837723  275844 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:18:27.839164  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:18:27.839691  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.856622  275844 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220701230032-10066" to be "Ready" ...
	I0701 23:18:27.856645  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0701 23:18:27.890091  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:18:27.891200  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:18:27.895622  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:18:27.896930  275844 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 23:18:27.896946  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 23:18:27.896980  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:18:27.937496  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:18:28.136017  275844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 23:18:28.136703  275844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:18:28.139953  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0701 23:18:28.139977  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0701 23:18:28.144217  275844 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0701 23:18:28.144239  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0701 23:18:28.234055  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0701 23:18:28.234083  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0701 23:18:28.318902  275844 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0701 23:18:28.318936  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0701 23:18:28.336787  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0701 23:18:28.336818  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0701 23:18:28.423063  275844 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:18:28.423089  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0701 23:18:28.427844  275844 start.go:809] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0701 23:18:28.432989  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0701 23:18:28.433019  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0701 23:18:28.442227  275844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:18:28.523695  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0701 23:18:28.523727  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0701 23:18:28.618333  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0701 23:18:28.618365  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0701 23:18:28.636855  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0701 23:18:28.636885  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0701 23:18:28.652952  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0701 23:18:28.652974  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0701 23:18:28.739775  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:18:28.739814  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0701 23:18:28.832453  275844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:18:29.251359  275844 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:29.544427  275844 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0701 23:18:29.545959  275844 addons.go:414] enableAddons completed in 1.78263451s
	I0701 23:18:29.863227  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:30.003794  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:32.503813  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:31.863254  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:34.363382  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:36.363413  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:34.504191  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:37.003581  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:38.363717  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:40.863294  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:39.504225  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:42.003356  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:42.863457  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:45.363613  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:44.003625  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:46.504247  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:47.863096  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:49.863849  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:49.003291  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:51.003453  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:52.363545  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:54.363732  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:53.504320  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:56.003487  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:56.862624  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:58.863111  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:00.863425  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:58.504264  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:00.504489  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:03.003398  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:03.363680  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:05.363957  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:05.004021  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:07.503771  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:07.364035  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:09.364588  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:10.003129  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:12.003382  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:11.863661  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:14.362895  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:16.363322  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:14.504382  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:17.003939  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:19.503019  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:20.505831  269883 node_ready.go:38] duration metric: took 4m0.007935364s waiting for node "no-preload-20220701225718-10066" to be "Ready" ...
	I0701 23:19:20.507971  269883 out.go:177] 
	W0701 23:19:20.509514  269883 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0701 23:19:20.509536  269883 out.go:239] * 
	W0701 23:19:20.510312  269883 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 23:19:20.511951  269883 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	7a29398760697       6fb66cd78abfe       About a minute ago   Running             kindnet-cni               1                   87b8e95bd6528
	b44647d75d006       6fb66cd78abfe       4 minutes ago        Exited              kindnet-cni               0                   87b8e95bd6528
	b49ce69e2c582       a634548d10b03       4 minutes ago        Running             kube-proxy                0                   b58699e3af072
	becf96e8231dc       aebe758cef4cd       4 minutes ago        Running             etcd                      2                   2f69dd21fb9f2
	ab7802906a7b0       d3377ffb7177c       4 minutes ago        Running             kube-apiserver            2                   55afd0afff51f
	0efd5173ba061       34cdf99b1bb3b       4 minutes ago        Running             kube-controller-manager   2                   86e1c2cbbd62a
	62574f0759001       5d725196c1f47       4 minutes ago        Running             kube-scheduler            2                   96e658a134b04
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-07-01 23:10:29 UTC, end at Fri 2022-07-01 23:19:21 UTC. --
	Jul 01 23:15:19 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:15:19.948285189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 23:15:19 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:15:19.948298315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 23:15:19 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:15:19.948547246Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/87b8e95bd6528609118b532731d4452a52308b23651091cdaf220166488ae104 pid=3455 runtime=io.containerd.runc.v2
	Jul 01 23:15:20 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:15:20.006024609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5mclw,Uid:994d6280-4cf7-4b84-9732-da9a458e6f45,Namespace:kube-system,Attempt:0,} returns sandbox id \"b58699e3af072cafbfce0168ca8012f650bb681a30e60b1622a32fc5fea15200\""
	Jul 01 23:15:20 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:15:20.008576478Z" level=info msg="CreateContainer within sandbox \"b58699e3af072cafbfce0168ca8012f650bb681a30e60b1622a32fc5fea15200\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Jul 01 23:15:20 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:15:20.022929219Z" level=info msg="CreateContainer within sandbox \"b58699e3af072cafbfce0168ca8012f650bb681a30e60b1622a32fc5fea15200\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b49ce69e2c58257158688cb2227d8b0dda1b0aa00833f9b3b42a772e1765f35b\""
	Jul 01 23:15:20 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:15:20.023450933Z" level=info msg="StartContainer for \"b49ce69e2c58257158688cb2227d8b0dda1b0aa00833f9b3b42a772e1765f35b\""
	Jul 01 23:15:20 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:15:20.087470112Z" level=info msg="StartContainer for \"b49ce69e2c58257158688cb2227d8b0dda1b0aa00833f9b3b42a772e1765f35b\" returns successfully"
	Jul 01 23:15:20 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:15:20.218759548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-7kwfz,Uid:c2ec92c0-3a08-45d8-aeb8-b2a4b5cb6e2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"87b8e95bd6528609118b532731d4452a52308b23651091cdaf220166488ae104\""
	Jul 01 23:15:20 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:15:20.221597305Z" level=info msg="CreateContainer within sandbox \"87b8e95bd6528609118b532731d4452a52308b23651091cdaf220166488ae104\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Jul 01 23:15:20 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:15:20.233840739Z" level=info msg="CreateContainer within sandbox \"87b8e95bd6528609118b532731d4452a52308b23651091cdaf220166488ae104\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"b44647d75d006da4be7e60086562915fbe16a84409b56d9fce3085c750f919d9\""
	Jul 01 23:15:20 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:15:20.234417421Z" level=info msg="StartContainer for \"b44647d75d006da4be7e60086562915fbe16a84409b56d9fce3085c750f919d9\""
	Jul 01 23:15:20 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:15:20.333652243Z" level=info msg="StartContainer for \"b44647d75d006da4be7e60086562915fbe16a84409b56d9fce3085c750f919d9\" returns successfully"
	Jul 01 23:16:06 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:16:06.824570383Z" level=error msg="ContainerStatus for \"549e814f9bb6670500810d899a0d864265a5e8f562cb6592b8e0e1581a63c836\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"549e814f9bb6670500810d899a0d864265a5e8f562cb6592b8e0e1581a63c836\": not found"
	Jul 01 23:16:06 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:16:06.825222248Z" level=error msg="ContainerStatus for \"a069ef780eee228f91f4c53a2bcc95d35c8ce8ff74d4eb433e88f0b87a10cbcd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a069ef780eee228f91f4c53a2bcc95d35c8ce8ff74d4eb433e88f0b87a10cbcd\": not found"
	Jul 01 23:16:06 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:16:06.825728758Z" level=error msg="ContainerStatus for \"b6c46b43c578cf7a32b68dbbe142a34080a7ce8d919b3b25cd350be67762ece8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b6c46b43c578cf7a32b68dbbe142a34080a7ce8d919b3b25cd350be67762ece8\": not found"
	Jul 01 23:16:06 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:16:06.826197748Z" level=error msg="ContainerStatus for \"eda075780c832dfe04507c2668503acb6504ba83f9b88b95781fa0bf6b904140\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eda075780c832dfe04507c2668503acb6504ba83f9b88b95781fa0bf6b904140\": not found"
	Jul 01 23:18:00 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:18:00.742548644Z" level=info msg="shim disconnected" id=b44647d75d006da4be7e60086562915fbe16a84409b56d9fce3085c750f919d9
	Jul 01 23:18:00 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:18:00.742620665Z" level=warning msg="cleaning up after shim disconnected" id=b44647d75d006da4be7e60086562915fbe16a84409b56d9fce3085c750f919d9 namespace=k8s.io
	Jul 01 23:18:00 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:18:00.742637998Z" level=info msg="cleaning up dead shim"
	Jul 01 23:18:00 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:18:00.751933739Z" level=warning msg="cleanup warnings time=\"2022-07-01T23:18:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3946 runtime=io.containerd.runc.v2\n"
	Jul 01 23:18:01 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:18:01.400733486Z" level=info msg="CreateContainer within sandbox \"87b8e95bd6528609118b532731d4452a52308b23651091cdaf220166488ae104\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Jul 01 23:18:01 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:18:01.412470704Z" level=info msg="CreateContainer within sandbox \"87b8e95bd6528609118b532731d4452a52308b23651091cdaf220166488ae104\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"7a293987606973e87df061b7f552dc0eb5a70ea9394f2d383228f9c2d3742d5d\""
	Jul 01 23:18:01 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:18:01.412917495Z" level=info msg="StartContainer for \"7a293987606973e87df061b7f552dc0eb5a70ea9394f2d383228f9c2d3742d5d\""
	Jul 01 23:18:01 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:18:01.621277541Z" level=info msg="StartContainer for \"7a293987606973e87df061b7f552dc0eb5a70ea9394f2d383228f9c2d3742d5d\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220701225718-10066
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220701225718-10066
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04
	                    minikube.k8s.io/name=no-preload-20220701225718-10066
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_01T23_15_07_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 01 Jul 2022 23:15:03 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220701225718-10066
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 01 Jul 2022 23:19:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 01 Jul 2022 23:15:17 +0000   Fri, 01 Jul 2022 23:15:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 01 Jul 2022 23:15:17 +0000   Fri, 01 Jul 2022 23:15:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 01 Jul 2022 23:15:17 +0000   Fri, 01 Jul 2022 23:15:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 01 Jul 2022 23:15:17 +0000   Fri, 01 Jul 2022 23:15:01 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-20220701225718-10066
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873484Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873484Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                82dabe3f-d133-4afb-a4d2-ee1450b85ce0
	  Boot ID:                    a4927dcd-d031-4927-a8c8-2ea0f9a10287
	  Kernel Version:             5.15.0-1012-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.6
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-20220701225718-10066                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m14s
	  kube-system                 kindnet-7kwfz                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-no-preload-20220701225718-10066              250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-controller-manager-no-preload-20220701225718-10066    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-proxy-5mclw                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-no-preload-20220701225718-10066              100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m1s   kube-proxy       
	  Normal  Starting                 4m15s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m15s  kubelet          Node no-preload-20220701225718-10066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m15s  kubelet          Node no-preload-20220701225718-10066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m15s  kubelet          Node no-preload-20220701225718-10066 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m14s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s   node-controller  Node no-preload-20220701225718-10066 event: Registered Node no-preload-20220701225718-10066 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +1.002277] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +2.015803] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000004] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +4.255546] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000007] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000001] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000005] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000011] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +8.195166] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000007] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000003] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000003] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	
	* 
	* ==> etcd [becf96e8231dc4efb269b660148e06fcc627b6ed8e784d88e605bc513ffa4068] <==
	* {"level":"info","ts":"2022-07-01T23:15:00.440Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2022-07-01T23:15:00.440Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2022-07-01T23:15:00.442Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-01T23:15:00.442Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2022-07-01T23:15:00.442Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2022-07-01T23:15:00.443Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-01T23:15:00.443Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-01T23:15:01.433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-01T23:15:01.433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-01T23:15:01.433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2022-07-01T23:15:01.433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2022-07-01T23:15:01.433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2022-07-01T23:15:01.433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2022-07-01T23:15:01.433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2022-07-01T23:15:01.433Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:15:01.434Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:15:01.435Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:15:01.435Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:15:01.435Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-20220701225718-10066 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-01T23:15:01.435Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-01T23:15:01.435Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-01T23:15:01.435Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-01T23:15:01.435Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-01T23:15:01.436Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2022-07-01T23:15:01.436Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  23:19:21 up  1:01,  0 users,  load average: 0.62, 0.58, 1.18
	Linux no-preload-20220701225718-10066 5.15.0-1012-gcp #17~20.04.1-Ubuntu SMP Thu Jun 23 16:10:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [ab7802906a7b09692d38717ace4669db0b1201b927c68b01400a1e45e6dae90b] <==
	* I0701 23:15:20.148856       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0701 23:15:21.536334       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.108.94.160]
	I0701 23:15:22.250335       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.105.152.52]
	I0701 23:15:22.261405       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.96.31.24]
	W0701 23:15:22.358445       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:15:22.358472       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0701 23:15:22.358479       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 23:15:22.358504       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:15:22.358607       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0701 23:15:22.359763       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 23:16:22.359211       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:16:22.359249       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0701 23:16:22.359257       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 23:16:22.360300       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:16:22.360373       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0701 23:16:22.360385       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 23:18:22.360399       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:18:22.360442       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0701 23:18:22.360450       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 23:18:22.360488       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:18:22.360556       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0701 23:18:22.361581       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [0efd5173ba0612f8f1dec4c85b81dd59f286dfe8f01d588eb70b05fb32f2f7f0] <==
	* E0701 23:15:22.152792       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0701 23:15:22.152836       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0701 23:15:22.217057       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0701 23:15:22.217098       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0701 23:15:22.219984       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0701 23:15:22.220013       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0701 23:15:22.241083       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-7vh58"
	I0701 23:15:22.318644       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-bjxjm"
	I0701 23:15:22.323819       1 event.go:294] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	E0701 23:15:48.807487       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:15:49.335210       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:16:18.822334       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:16:19.349846       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:16:48.838483       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:16:49.365416       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:17:18.854802       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:17:19.380823       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:17:48.871247       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:17:49.394988       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:18:18.886856       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:18:19.410259       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:18:48.902807       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:18:49.425796       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:19:18.917063       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:19:19.441361       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [b49ce69e2c58257158688cb2227d8b0dda1b0aa00833f9b3b42a772e1765f35b] <==
	* I0701 23:15:20.122786       1 node.go:163] Successfully retrieved node IP: 192.168.94.2
	I0701 23:15:20.122840       1 server_others.go:138] "Detected node IP" address="192.168.94.2"
	I0701 23:15:20.122872       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0701 23:15:20.145399       1 server_others.go:206] "Using iptables Proxier"
	I0701 23:15:20.145447       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0701 23:15:20.145461       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0701 23:15:20.145476       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0701 23:15:20.145520       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 23:15:20.145701       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 23:15:20.145952       1 server.go:661] "Version info" version="v1.24.2"
	I0701 23:15:20.145976       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 23:15:20.146765       1 config.go:317] "Starting service config controller"
	I0701 23:15:20.146789       1 config.go:226] "Starting endpoint slice config controller"
	I0701 23:15:20.146803       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0701 23:15:20.146812       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0701 23:15:20.146944       1 config.go:444] "Starting node config controller"
	I0701 23:15:20.146972       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0701 23:15:20.247840       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0701 23:15:20.247852       1 shared_informer.go:262] Caches are synced for service config
	I0701 23:15:20.247946       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [62574f07590010ac5157c1dbc72d41c9fd2a0b4834828193da39400420cee4b4] <==
	* W0701 23:15:03.741505       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0701 23:15:03.741853       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0701 23:15:03.741949       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0701 23:15:03.741632       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0701 23:15:03.742068       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0701 23:15:03.742249       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0701 23:15:03.743167       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 23:15:03.743203       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0701 23:15:03.743413       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0701 23:15:03.743445       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0701 23:15:03.743525       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0701 23:15:03.743548       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0701 23:15:03.743728       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0701 23:15:03.743767       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0701 23:15:04.645805       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 23:15:04.645874       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0701 23:15:04.709168       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0701 23:15:04.709203       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0701 23:15:04.738349       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0701 23:15:04.738392       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0701 23:15:04.818092       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0701 23:15:04.818135       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0701 23:15:04.818318       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0701 23:15:04.818372       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0701 23:15:07.032640       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-07-01 23:10:29 UTC, end at Fri 2022-07-01 23:19:21 UTC. --
	Jul 01 23:17:22 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:17:22.157007    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:17:27 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:17:27.157901    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:17:32 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:17:32.158886    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:17:37 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:17:37.160432    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:17:42 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:17:42.161445    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:17:47 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:17:47.162122    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:17:52 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:17:52.163195    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:17:57 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:17:57.164418    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:18:01 no-preload-20220701225718-10066 kubelet[3057]: I0701 23:18:01.398299    3057 scope.go:110] "RemoveContainer" containerID="b44647d75d006da4be7e60086562915fbe16a84409b56d9fce3085c750f919d9"
	Jul 01 23:18:02 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:18:02.165545    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:18:07 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:18:07.167123    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:18:12 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:18:12.167847    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:18:17 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:18:17.168555    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:18:22 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:18:22.169501    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:18:27 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:18:27.170678    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:18:32 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:18:32.172145    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:18:37 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:18:37.173003    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:18:42 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:18:42.174763    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:18:47 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:18:47.175607    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:18:52 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:18:52.177015    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:18:57 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:18:57.177951    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:19:02 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:19:02.179413    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:19:07 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:19:07.180321    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:19:12 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:19:12.181271    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:19:17 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:19:17.182279    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
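
The two failures that repeat through the log above are the controller-manager being unable to reach the metrics.k8s.io/v1beta1 API and the kubelet reporting "cni plugin not initialized". A hedged diagnostic sketch, not part of the test harness; the context and profile names are taken from this report, and the /etc/cni/net.mk path is an assumption based on the conf_dir rewrite visible later in this report:

	# Is the metrics-server APIService actually Available? The
	# resource_quota_controller/garbagecollector errors above point at it.
	kubectl --context no-preload-20220701225718-10066 get apiservice v1beta1.metrics.k8s.io
	# Does any CNI config exist on the node? minikube's containerd setup
	# rewrites conf_dir to /etc/cni/net.mk (assumption for this profile).
	out/minikube-linux-amd64 -p no-preload-20220701225718-10066 ssh -- ls -l /etc/cni/net.d /etc/cni/net.mk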
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220701225718-10066 -n no-preload-20220701225718-10066
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220701225718-10066 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-6d4b75cb6d-6jmqb metrics-server-5c6f97fb75-629bq storage-provisioner dashboard-metrics-scraper-dffd48c4c-bjxjm kubernetes-dashboard-5fd5574d9f-7vh58
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220701225718-10066 describe pod coredns-6d4b75cb6d-6jmqb metrics-server-5c6f97fb75-629bq storage-provisioner dashboard-metrics-scraper-dffd48c4c-bjxjm kubernetes-dashboard-5fd5574d9f-7vh58
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220701225718-10066 describe pod coredns-6d4b75cb6d-6jmqb metrics-server-5c6f97fb75-629bq storage-provisioner dashboard-metrics-scraper-dffd48c4c-bjxjm kubernetes-dashboard-5fd5574d9f-7vh58: exit status 1 (54.188243ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-6d4b75cb6d-6jmqb" not found
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-629bq" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-dffd48c4c-bjxjm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-5fd5574d9f-7vh58" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220701225718-10066 describe pod coredns-6d4b75cb6d-6jmqb metrics-server-5c6f97fb75-629bq storage-provisioner dashboard-metrics-scraper-dffd48c4c-bjxjm kubernetes-dashboard-5fd5574d9f-7vh58: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (534.10s)
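
The NotFound errors above follow from how the helper queries: kubectl describe pod is run without a namespace, so it looks only in the context's default namespace, while the non-running pods listed earlier most likely live in kube-system and kubernetes-dashboard (inferred from their standard names). A namespaced sketch of the same query:

	# Hypothetical re-run of the post-mortem describe against the namespace
	# the pod actually lives in; pod name copied from the helper output above.
	kubectl --context no-preload-20220701225718-10066 -n kube-system describe pod coredns-6d4b75cb6d-6jmqb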

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/SecondStart (533.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220701230032-10066 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-different-port-20220701230032-10066 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2: exit status 80 (8m51.440358977s)

                                                
                                                
-- stdout --
	* [default-k8s-different-port-20220701230032-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14483
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node default-k8s-different-port-20220701230032-10066 in cluster default-k8s-different-port-20220701230032-10066
	* Pulling base image ...
	* Restarting existing docker container for "default-k8s-different-port-20220701230032-10066" ...
	* Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image kubernetesui/dashboard:v2.6.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image k8s.gcr.io/echoserver:1.4
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
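The stdout above ends with addons enabled even though the start command exits with status 80; the stderr trace below records where verification stalled. A hedged sketch for collecting the same post-mortem by hand (profile name taken from the test; status and logs are standard minikube subcommands):

	# Summarize host/kubelet/apiserver state for the failing profile.
	out/minikube-linux-amd64 status -p default-k8s-different-port-20220701230032-10066
	# Capture the full cluster logs to a file for inspection.
	out/minikube-linux-amd64 logs -p default-k8s-different-port-20220701230032-10066 --file=logs.txt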
** stderr ** 
	I0701 23:13:36.508585  275844 out.go:296] Setting OutFile to fd 1 ...
	I0701 23:13:36.508812  275844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:13:36.508825  275844 out.go:309] Setting ErrFile to fd 2...
	I0701 23:13:36.508833  275844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:13:36.509394  275844 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 23:13:36.509707  275844 out.go:303] Setting JSON to false
	I0701 23:13:36.511123  275844 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3370,"bootTime":1656713847,"procs":332,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 23:13:36.511210  275844 start.go:125] virtualization: kvm guest
	I0701 23:13:36.513852  275844 out.go:177] * [default-k8s-different-port-20220701230032-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0701 23:13:36.516346  275844 out.go:177]   - MINIKUBE_LOCATION=14483
	I0701 23:13:36.516221  275844 notify.go:193] Checking for updates...
	I0701 23:13:36.517990  275844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 23:13:36.519337  275844 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:13:36.520961  275844 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 23:13:36.522517  275844 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 23:13:36.524336  275844 config.go:178] Loaded profile config "default-k8s-different-port-20220701230032-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:13:36.524783  275844 driver.go:360] Setting default libvirt URI to qemu:///system
	I0701 23:13:36.571678  275844 docker.go:137] docker version: linux-20.10.17
	I0701 23:13:36.571797  275844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:13:36.688003  275844 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-07-01 23:13:36.603240517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:13:36.688097  275844 docker.go:254] overlay module found
	I0701 23:13:36.689718  275844 out.go:177] * Using the docker driver based on existing profile
	I0701 23:13:36.691073  275844 start.go:284] selected driver: docker
	I0701 23:13:36.691091  275844 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:13:36.691176  275844 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 23:13:36.711421  275844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:13:36.815393  275844 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-07-01 23:13:36.741940503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:13:36.815669  275844 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 23:13:36.815700  275844 cni.go:95] Creating CNI manager for ""
	I0701 23:13:36.815708  275844 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:13:36.815734  275844 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:13:36.817973  275844 out.go:177] * Starting control plane node default-k8s-different-port-20220701230032-10066 in cluster default-k8s-different-port-20220701230032-10066
	I0701 23:13:36.819338  275844 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0701 23:13:36.820691  275844 out.go:177] * Pulling base image ...
	I0701 23:13:36.821863  275844 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:13:36.821911  275844 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4
	I0701 23:13:36.821925  275844 cache.go:57] Caching tarball of preloaded images
	I0701 23:13:36.821988  275844 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0701 23:13:36.822107  275844 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 23:13:36.822124  275844 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on containerd
	I0701 23:13:36.822229  275844 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/config.json ...
	I0701 23:13:36.857028  275844 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0701 23:13:36.857061  275844 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0701 23:13:36.857085  275844 cache.go:208] Successfully downloaded all kic artifacts
	I0701 23:13:36.857128  275844 start.go:352] acquiring machines lock for default-k8s-different-port-20220701230032-10066: {Name:mk7518221e8259d073969ba977a5dbef99fe5935 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:13:36.857241  275844 start.go:356] acquired machines lock for "default-k8s-different-port-20220701230032-10066" in 79.413µs
	I0701 23:13:36.857265  275844 start.go:94] Skipping create...Using existing machine configuration
	I0701 23:13:36.857273  275844 fix.go:55] fixHost starting: 
	I0701 23:13:36.857565  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:13:36.889959  275844 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220701230032-10066: state=Stopped err=<nil>
	W0701 23:13:36.890003  275844 fix.go:129] unexpected machine state, will restart: <nil>
	I0701 23:13:36.892196  275844 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220701230032-10066" ...
	I0701 23:13:36.893583  275844 cli_runner.go:164] Run: docker start default-k8s-different-port-20220701230032-10066
	I0701 23:13:37.260876  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:13:37.298699  275844 kic.go:416] container "default-k8s-different-port-20220701230032-10066" state is running.
	I0701 23:13:37.299071  275844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:13:37.333911  275844 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/config.json ...
	I0701 23:13:37.334149  275844 machine.go:88] provisioning docker machine ...
	I0701 23:13:37.334173  275844 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220701230032-10066"
	I0701 23:13:37.334223  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:37.368604  275844 main.go:134] libmachine: Using SSH client type: native
	I0701 23:13:37.368836  275844 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0701 23:13:37.368867  275844 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220701230032-10066 && echo "default-k8s-different-port-20220701230032-10066" | sudo tee /etc/hostname
	I0701 23:13:37.369499  275844 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35278->127.0.0.1:49442: read: connection reset by peer
	I0701 23:13:40.494516  275844 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220701230032-10066
	
	I0701 23:13:40.494611  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:40.527972  275844 main.go:134] libmachine: Using SSH client type: native
	I0701 23:13:40.528160  275844 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0701 23:13:40.528184  275844 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220701230032-10066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220701230032-10066/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220701230032-10066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 23:13:40.641942  275844 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0701 23:13:40.641973  275844 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
	I0701 23:13:40.642000  275844 ubuntu.go:177] setting up certificates
	I0701 23:13:40.642011  275844 provision.go:83] configureAuth start
	I0701 23:13:40.642064  275844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:13:40.675855  275844 provision.go:138] copyHostCerts
	I0701 23:13:40.675913  275844 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
	I0701 23:13:40.675927  275844 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
	I0701 23:13:40.675991  275844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
	I0701 23:13:40.676060  275844 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
	I0701 23:13:40.676071  275844 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
	I0701 23:13:40.676098  275844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
	I0701 23:13:40.676148  275844 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
	I0701 23:13:40.676158  275844 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
	I0701 23:13:40.676192  275844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1679 bytes)
	I0701 23:13:40.676235  275844 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220701230032-10066 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220701230032-10066]
	I0701 23:13:40.954393  275844 provision.go:172] copyRemoteCerts
	I0701 23:13:40.954451  275844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 23:13:40.954482  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:40.989611  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.073447  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 23:13:41.090826  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0701 23:13:41.107547  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 23:13:41.124219  275844 provision.go:86] duration metric: configureAuth took 482.194415ms
	I0701 23:13:41.124245  275844 ubuntu.go:193] setting minikube options for container-runtime
	I0701 23:13:41.124417  275844 config.go:178] Loaded profile config "default-k8s-different-port-20220701230032-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:13:41.124431  275844 machine.go:91] provisioned docker machine in 3.790266635s
	I0701 23:13:41.124441  275844 start.go:306] post-start starting for "default-k8s-different-port-20220701230032-10066" (driver="docker")
	I0701 23:13:41.124452  275844 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 23:13:41.124510  275844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 23:13:41.124554  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.158325  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.245657  275844 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 23:13:41.248516  275844 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 23:13:41.248538  275844 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 23:13:41.248546  275844 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 23:13:41.248551  275844 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0701 23:13:41.248559  275844 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
	I0701 23:13:41.248598  275844 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
	I0701 23:13:41.248664  275844 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem -> 100662.pem in /etc/ssl/certs
	I0701 23:13:41.248742  275844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 23:13:41.255535  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:13:41.272444  275844 start.go:309] post-start completed in 147.990653ms
	I0701 23:13:41.272501  275844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 23:13:41.272534  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.306973  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.391227  275844 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 23:13:41.395145  275844 fix.go:57] fixHost completed within 4.53786816s
	I0701 23:13:41.395167  275844 start.go:81] releasing machines lock for "default-k8s-different-port-20220701230032-10066", held for 4.537914302s
	I0701 23:13:41.395240  275844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.428938  275844 ssh_runner.go:195] Run: systemctl --version
	I0701 23:13:41.428983  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.428986  275844 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0701 23:13:41.429036  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.463442  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.464061  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.546236  275844 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 23:13:41.557434  275844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 23:13:41.566944  275844 docker.go:179] disabling docker service ...
	I0701 23:13:41.566994  275844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 23:13:41.575898  275844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 23:13:41.584165  275844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 23:13:41.651388  275844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 23:13:41.723308  275844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0701 23:13:41.731887  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 23:13:41.744366  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0701 23:13:41.752324  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0701 23:13:41.760056  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0701 23:13:41.767864  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0701 23:13:41.775399  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0701 23:13:41.782555  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
	I0701 23:13:41.794357  275844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 23:13:41.800246  275844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 23:13:41.806090  275844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 23:13:41.881056  275844 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 23:13:41.950865  275844 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0701 23:13:41.950932  275844 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0701 23:13:41.955104  275844 start.go:471] Will wait 60s for crictl version
	I0701 23:13:41.955155  275844 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:13:41.981690  275844 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-07-01T23:13:41Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0701 23:13:53.029041  275844 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:13:53.051421  275844 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0701 23:13:53.051470  275844 ssh_runner.go:195] Run: containerd --version
	I0701 23:13:53.078982  275844 ssh_runner.go:195] Run: containerd --version
	I0701 23:13:53.109597  275844 out.go:177] * Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	I0701 23:13:53.110955  275844 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220701230032-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 23:13:53.143106  275844 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0701 23:13:53.146306  275844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:13:53.155228  275844 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:13:53.155287  275844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:13:53.177026  275844 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:13:53.177047  275844 containerd.go:461] Images already preloaded, skipping extraction
	I0701 23:13:53.177094  275844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:13:53.198475  275844 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:13:53.198501  275844 cache_images.go:84] Images are preloaded, skipping loading
	I0701 23:13:53.198643  275844 ssh_runner.go:195] Run: sudo crictl info
	I0701 23:13:53.221518  275844 cni.go:95] Creating CNI manager for ""
	I0701 23:13:53.221540  275844 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:13:53.221552  275844 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0701 23:13:53.221564  275844 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220701230032-10066 NodeName:default-k8s-different-port-20220701230032-10066 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0701 23:13:53.221715  275844 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220701230032-10066"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
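minikube writes this generated config to /var/tmp/minikube/kubeadm.yaml.new before applying it (see the scp and diff steps below). As a sketch, such a file could be sanity-checked out-of-band with `kubeadm init --dry-run`, which validates the documents without changing the node (binary and config paths taken from the log; error handling elided):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Dry-run the generated kubeadm config; kubeadm parses and
        // validates the InitConfiguration/ClusterConfiguration documents
        // without touching the node.
        cmd := exec.Command("/var/lib/minikube/binaries/v1.24.2/kubeadm",
            "init", "--dry-run", "--config", "/var/tmp/minikube/kubeadm.yaml.new")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s\nerr: %v\n", out, err)
    }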
	
	I0701 23:13:53.221814  275844 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220701230032-10066 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0701 23:13:53.221875  275844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0701 23:13:53.228898  275844 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 23:13:53.228952  275844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 23:13:53.235366  275844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0701 23:13:53.247371  275844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 23:13:53.259313  275844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0701 23:13:53.271530  275844 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0701 23:13:53.274142  275844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:13:53.282892  275844 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066 for IP: 192.168.76.2
	I0701 23:13:53.282980  275844 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
	I0701 23:13:53.283015  275844 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
	I0701 23:13:53.283078  275844 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/client.key
	I0701 23:13:53.283124  275844 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key.31bdca25
	I0701 23:13:53.283163  275844 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.key
	I0701 23:13:53.283252  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem (1338 bytes)
	W0701 23:13:53.283280  275844 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066_empty.pem, impossibly tiny 0 bytes
	I0701 23:13:53.283295  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 23:13:53.283320  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
	I0701 23:13:53.283343  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
	I0701 23:13:53.283367  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1679 bytes)
	I0701 23:13:53.283409  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:13:53.283939  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0701 23:13:53.300388  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 23:13:53.317215  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 23:13:53.333335  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0701 23:13:53.349529  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 23:13:53.365494  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0701 23:13:53.381103  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 23:13:53.396977  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 23:13:53.412881  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 23:13:53.429709  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem --> /usr/share/ca-certificates/10066.pem (1338 bytes)
	I0701 23:13:53.446017  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /usr/share/ca-certificates/100662.pem (1708 bytes)
	I0701 23:13:53.461814  275844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 23:13:53.473437  275844 ssh_runner.go:195] Run: openssl version
	I0701 23:13:53.478032  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 23:13:53.484818  275844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:13:53.487660  275844 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul  1 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:13:53.487710  275844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:13:53.492105  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 23:13:53.498584  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10066.pem && ln -fs /usr/share/ca-certificates/10066.pem /etc/ssl/certs/10066.pem"
	I0701 23:13:53.505448  275844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10066.pem
	I0701 23:13:53.508315  275844 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul  1 22:28 /usr/share/ca-certificates/10066.pem
	I0701 23:13:53.508365  275844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10066.pem
	I0701 23:13:53.512833  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10066.pem /etc/ssl/certs/51391683.0"
	I0701 23:13:53.519315  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100662.pem && ln -fs /usr/share/ca-certificates/100662.pem /etc/ssl/certs/100662.pem"
	I0701 23:13:53.526653  275844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100662.pem
	I0701 23:13:53.529618  275844 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul  1 22:28 /usr/share/ca-certificates/100662.pem
	I0701 23:13:53.529700  275844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100662.pem
	I0701 23:13:53.534593  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100662.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 23:13:53.541972  275844 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:13:53.542071  275844 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0701 23:13:53.542137  275844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:13:53.565066  275844 cri.go:87] found id: "e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e"
	I0701 23:13:53.565094  275844 cri.go:87] found id: "b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7"
	I0701 23:13:53.565103  275844 cri.go:87] found id: "50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52"
	I0701 23:13:53.565110  275844 cri.go:87] found id: "f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c"
	I0701 23:13:53.565115  275844 cri.go:87] found id: "a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2"
	I0701 23:13:53.565121  275844 cri.go:87] found id: "042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c"
	I0701 23:13:53.565127  275844 cri.go:87] found id: ""
	I0701 23:13:53.565155  275844 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0701 23:13:53.577099  275844 cri.go:114] JSON = null
	W0701 23:13:53.577140  275844 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
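The `unpause failed` warning above comes from cross-checking two views of the runtime: the CRI level (`crictl ps`) against the OCI level (`runc list`), which here returned the literal JSON `null`, i.e. zero containers. A rough Go sketch of that comparison, assuming both CLIs are on PATH:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // CRI view: container IDs known to containerd's CRI plugin,
        // same command as in the log.
        psOut, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        criIDs := strings.Fields(string(psOut))

        // OCI view: runc's bookkeeping under the k8s.io root. In the log
        // this printed "null", so the slice stays empty after unmarshal.
        runcOut, _ := exec.Command("sudo", "runc",
            "--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
        var runcList []map[string]any
        _ = json.Unmarshal(runcOut, &runcList)

        if len(runcList) != len(criIDs) {
            fmt.Printf("mismatch: runc sees %d, crictl sees %d\n",
                len(runcList), len(criIDs))
        }
    }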
	I0701 23:13:53.577183  275844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 23:13:53.583727  275844 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0701 23:13:53.583745  275844 kubeadm.go:626] restartCluster start
	I0701 23:13:53.583773  275844 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 23:13:53.589812  275844 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:53.590282  275844 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220701230032-10066" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:13:53.590469  275844 kubeconfig.go:127] "default-k8s-different-port-20220701230032-10066" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig - will repair!
	I0701 23:13:53.590950  275844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:13:53.592051  275844 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 23:13:53.598266  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:53.598304  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:53.605628  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:53.806026  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:53.806089  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:53.814576  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.005749  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.005835  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.013967  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.206355  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.206416  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.215350  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.406581  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.406651  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.415525  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.605755  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.605834  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.614602  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.805813  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.805894  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.814430  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.006748  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.006824  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.015390  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.206606  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.206712  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.215161  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.406468  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.406570  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.415209  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.606590  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.606691  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.615437  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.806738  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.806828  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.815002  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.006349  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.006435  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.014726  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.205912  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.205993  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.214477  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.405750  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.405831  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.414060  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.606652  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.606715  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.615356  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.615374  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.615402  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.623156  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.623180  275844 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0701 23:13:56.623187  275844 kubeadm.go:1092] stopping kube-system containers ...
	I0701 23:13:56.623201  275844 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0701 23:13:56.623258  275844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:13:56.649113  275844 cri.go:87] found id: "e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e"
	I0701 23:13:56.649133  275844 cri.go:87] found id: "b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7"
	I0701 23:13:56.649140  275844 cri.go:87] found id: "50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52"
	I0701 23:13:56.649146  275844 cri.go:87] found id: "f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c"
	I0701 23:13:56.649152  275844 cri.go:87] found id: "a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2"
	I0701 23:13:56.649158  275844 cri.go:87] found id: "042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c"
	I0701 23:13:56.649164  275844 cri.go:87] found id: ""
	I0701 23:13:56.649169  275844 cri.go:232] Stopping containers: [e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7 50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52 f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2 042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c]
	I0701 23:13:56.649212  275844 ssh_runner.go:195] Run: which crictl
	I0701 23:13:56.652179  275844 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7 50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52 f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2 042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c
	I0701 23:13:56.676014  275844 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0701 23:13:56.685537  275844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:13:56.692196  275844 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul  1 23:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul  1 23:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Jul  1 23:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jul  1 23:00 /etc/kubernetes/scheduler.conf
	
	I0701 23:13:56.692247  275844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0701 23:13:56.698641  275844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0701 23:13:56.704856  275844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0701 23:13:56.711153  275844 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.711210  275844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 23:13:56.717322  275844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0701 23:13:56.723423  275844 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.723459  275844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0701 23:13:56.729312  275844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:13:56.736598  275844 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0701 23:13:56.736617  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:56.781688  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:57.445598  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:57.633371  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:57.679946  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
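restartCluster re-runs individual `kubeadm init` phases rather than a full init. A compact sketch of the same phase sequence driven from Go (binary and config paths from the log; error handling abbreviated):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.24.2/kubeadm"
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        // Same order as the log: certs, kubeconfigs, kubelet,
        // control-plane static pods, then local etcd.
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", cfg)
            if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
                fmt.Printf("%v failed: %v\n%s\n", p, err, out)
                return
            }
        }
    }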
	I0701 23:13:57.749368  275844 api_server.go:51] waiting for apiserver process to appear ...
	I0701 23:13:57.749432  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:13:58.318180  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:13:58.818690  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:13:58.830974  275844 api_server.go:71] duration metric: took 1.081606586s to wait for apiserver process to appear ...
	I0701 23:13:58.831001  275844 api_server.go:87] waiting for apiserver healthz status ...
	I0701 23:13:58.831034  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:13:58.831436  275844 api_server.go:256] stopped: https://192.168.76.2:8444/healthz: Get "https://192.168.76.2:8444/healthz": dial tcp 192.168.76.2:8444: connect: connection refused
	I0701 23:13:59.331708  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:14:01.921615  275844 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:14:01.921654  275844 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:14:02.332201  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:14:02.336755  275844 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:14:02.336792  275844 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:14:02.831892  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:14:02.836248  275844 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:14:02.836275  275844 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
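The 500s above are the apiserver's composite /healthz endpoint reporting post-start hooks that have not finished yet; minikube simply keeps polling until it sees a 200. A minimal sketch of such a probe (the apiserver serves a cluster-local certificate, so this sketch skips verification; a real client would trust the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get("https://192.168.76.2:8444/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz:", string(body)) // "ok"
                    return
                }
                // 500 with [-] lines: some poststarthook checks still failing.
            }
            time.Sleep(500 * time.Millisecond)
        }
    }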
	I0701 23:14:03.331795  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:14:03.337047  275844 api_server.go:266] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0701 23:14:03.343503  275844 api_server.go:140] control plane version: v1.24.2
	I0701 23:14:03.343525  275844 api_server.go:130] duration metric: took 4.512518171s to wait for apiserver health ...
	I0701 23:14:03.343535  275844 cni.go:95] Creating CNI manager for ""
	I0701 23:14:03.343540  275844 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:14:03.345598  275844 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:14:03.347224  275844 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:14:03.350686  275844 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:14:03.350707  275844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:14:03.363866  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0701 23:14:04.295415  275844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 23:14:04.301798  275844 system_pods.go:59] 9 kube-system pods found
	I0701 23:14:04.301825  275844 system_pods.go:61] "coredns-6d4b75cb6d-zmnqs" [f0e0d22f-cd83-4531-8778-32070816b159] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:14:04.301837  275844 system_pods.go:61] "etcd-default-k8s-different-port-20220701230032-10066" [c4b3993a-3a6c-4827-8250-b951a48b9432] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0701 23:14:04.301844  275844 system_pods.go:61] "kindnet-49h72" [bee4a070-eb2f-45af-a824-f8ebb08e21cb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0701 23:14:04.301851  275844 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220701230032-10066" [2ce9acd5-e8e7-425b-bb9b-5dd480397910] Running
	I0701 23:14:04.301860  275844 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220701230032-10066" [2fec1fad-34c5-4b47-8713-8e789b816ca6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0701 23:14:04.301868  275844 system_pods.go:61] "kube-proxy-qg5j2" [c67a38f9-ae75-40ea-8992-85a437368c50] Running
	I0701 23:14:04.301873  275844 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220701230032-10066" [49056cd0-4107-4377-ba51-b97af35cbe72] Running
	I0701 23:14:04.301882  275844 system_pods.go:61] "metrics-server-5c6f97fb75-mkq9q" [f5b66095-14d2-4de4-9f1d-2cd5371ec0fd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:14:04.301890  275844 system_pods.go:61] "storage-provisioner" [6e0344bb-c7de-41f4-95d2-f30576ae036c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:14:04.301898  275844 system_pods.go:74] duration metric: took 6.458628ms to wait for pod list to return data ...
	I0701 23:14:04.301907  275844 node_conditions.go:102] verifying NodePressure condition ...
	I0701 23:14:04.304305  275844 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0701 23:14:04.304330  275844 node_conditions.go:123] node cpu capacity is 8
	I0701 23:14:04.304343  275844 node_conditions.go:105] duration metric: took 2.432316ms to run NodePressure ...
	I0701 23:14:04.304363  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:14:04.434166  275844 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0701 23:14:04.438097  275844 kubeadm.go:777] kubelet initialised
	I0701 23:14:04.438123  275844 kubeadm.go:778] duration metric: took 3.933976ms waiting for restarted kubelet to initialise ...
	I0701 23:14:04.438131  275844 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:14:04.443068  275844 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace to be "Ready" ...
	I0701 23:14:06.448162  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:08.448866  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:10.948772  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:12.949108  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:15.448393  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:17.948235  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:19.948671  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:21.948914  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:23.949013  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:26.448470  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:28.948377  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:30.948441  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:32.948974  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... 91 further pod_ready.go:102 entries elided: the same Pending/Unschedulable status for pod "coredns-6d4b75cb6d-zmnqs" was reported on every poll, roughly every 2.5s, from 23:14:35 through 23:17:59; only the timestamps differ ...]
	I0701 23:18:02.449005  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:18:04.445635  275844 pod_ready.go:81] duration metric: took 4m0.002536043s waiting for pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace to be "Ready" ...
	E0701 23:18:04.445658  275844 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace to be "Ready" (will not retry!)
	I0701 23:18:04.445676  275844 pod_ready.go:38] duration metric: took 4m0.00753476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:18:04.445715  275844 kubeadm.go:630] restartCluster took 4m10.861963713s
	W0701 23:18:04.445855  275844 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
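	(For reference, not part of the captured log: the repeated Unschedulable message above means the scheduler still sees the node.kubernetes.io/not-ready taint, which the kubelet only removes once the CNI is up and the node reports Ready. Assuming the profile's kubeconfig is active, a quick manual check would be:

	    $ kubectl describe node default-k8s-different-port-20220701230032-10066 | grep -i taint
	    $ kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
	)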
	I0701 23:18:04.445882  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0701 23:18:06.095490  275844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.649588457s)
	I0701 23:18:06.095547  275844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:18:06.104815  275844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:18:06.112334  275844 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0701 23:18:06.112376  275844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:18:06.119483  275844 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
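	(The status-2 exit here is expected: `kubeadm reset --force` wipes /etc/kubernetes, including admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, so the stale-config check finds nothing to clean up and minikube proceeds straight to `kubeadm init`. An illustrative way to confirm by hand, assuming the profile container is still running:

	    $ minikube ssh -p default-k8s-different-port-20220701230032-10066 -- sudo ls -la /etc/kubernetes
	)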
	I0701 23:18:06.119534  275844 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0701 23:18:06.370658  275844 out.go:204]   - Generating certificates and keys ...
	I0701 23:18:07.277086  275844 out.go:204]   - Booting up control plane ...
	I0701 23:18:14.316275  275844 out.go:204]   - Configuring RBAC rules ...
	I0701 23:18:14.730162  275844 cni.go:95] Creating CNI manager for ""
	I0701 23:18:14.730189  275844 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:18:14.731634  275844 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:18:14.732857  275844 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:18:14.739597  275844 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:18:14.739622  275844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:18:14.825236  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
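	(As the cni.go lines above record, the docker driver with the containerd runtime gets the kindnet CNI by default: the 2429-byte manifest is copied to /var/tmp/minikube/cni.yaml and applied with the bundled kubectl. A sketch of how one might verify the CNI actually came up, assuming minikube's default kindnet manifest names:

	    $ kubectl -n kube-system get daemonset kindnet
	    $ minikube ssh -- ls /opt/cni/bin
	)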
	I0701 23:18:15.561507  275844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 23:18:15.561626  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:15.561637  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04 minikube.k8s.io/name=default-k8s-different-port-20220701230032-10066 minikube.k8s.io/updated_at=2022_07_01T23_18_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:15.568394  275844 ops.go:34] apiserver oom_adj: -16
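	(The ops.go line records the apiserver's oom_adj of -16, i.e. the kernel's OOM killer strongly prefers to kill other processes first; the value comes from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` probe run at 23:18:15. The same check by hand, illustrative only:

	    $ minikube ssh -- 'cat /proc/$(pgrep kube-apiserver)/oom_adj'
	    -16
	)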
	I0701 23:18:15.634685  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:16.190642  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:16.690023  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:17.190952  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:17.690163  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:18.191022  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:18.690054  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:19.190723  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:19.690097  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:20.190968  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:20.691032  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:21.190434  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:21.690038  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:22.190938  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:22.690621  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:23.190651  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:23.690833  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:24.190934  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:24.690962  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:25.190256  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:25.690333  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:26.190101  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:26.690887  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:27.190074  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:27.248216  275844 kubeadm.go:1045] duration metric: took 11.686670316s to wait for elevateKubeSystemPrivileges.
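	(elevateKubeSystemPrivileges is the loop above: minikube creates the minikube-rbac clusterrolebinding, granting cluster-admin to the kube-system:default service account, then polls `kubectl get sa default` until that service account exists. Whether the binding took effect can be checked with an illustrative command such as:

	    $ kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:default
	)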
	I0701 23:18:27.248246  275844 kubeadm.go:397] StartCluster complete in 4m33.70628023s
	I0701 23:18:27.248264  275844 settings.go:142] acquiring lock: {Name:mk319951f11766fbe002e53432d5b04e4322851e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:18:27.248355  275844 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:18:27.249185  275844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:18:27.763199  275844 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220701230032-10066" rescaled to 1
	I0701 23:18:27.763267  275844 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 23:18:27.766618  275844 out.go:177] * Verifying Kubernetes components...
	I0701 23:18:27.763306  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 23:18:27.763330  275844 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0701 23:18:27.766747  275844 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.766765  275844 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220701230032-10066"
	W0701 23:18:27.766778  275844 addons.go:162] addon storage-provisioner should already be in state true
	I0701 23:18:27.766806  275844 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.766825  275844 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.766828  275844 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:18:27.763473  275844 config.go:178] Loaded profile config "default-k8s-different-port-20220701230032-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:18:27.766824  275844 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.768481  275844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:18:27.768504  275844 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220701230032-10066"
	W0701 23:18:27.766835  275844 addons.go:162] addon dashboard should already be in state true
	I0701 23:18:27.768632  275844 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:18:27.766843  275844 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.768713  275844 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220701230032-10066"
	W0701 23:18:27.768733  275844 addons.go:162] addon metrics-server should already be in state true
	I0701 23:18:27.768768  275844 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:18:27.767332  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.768887  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.769184  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.769187  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.831262  275844 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0701 23:18:27.832550  275844 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 23:18:27.833969  275844 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:18:27.833992  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 23:18:27.834040  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:18:27.835526  275844 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0701 23:18:27.833023  275844 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.837673  275844 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	W0701 23:18:27.837677  275844 addons.go:162] addon default-storageclass should already be in state true
	I0701 23:18:27.837692  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0701 23:18:27.839084  275844 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0701 23:18:27.839099  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0701 23:18:27.839108  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0701 23:18:27.839153  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:18:27.837723  275844 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:18:27.839164  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:18:27.839691  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.856622  275844 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220701230032-10066" to be "Ready" ...
	I0701 23:18:27.856645  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0701 23:18:27.890091  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:18:27.891200  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:18:27.895622  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:18:27.896930  275844 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 23:18:27.896946  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 23:18:27.896980  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:18:27.937496  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:18:28.136017  275844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 23:18:28.136703  275844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:18:28.139953  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0701 23:18:28.139977  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0701 23:18:28.144217  275844 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0701 23:18:28.144239  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0701 23:18:28.234055  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0701 23:18:28.234083  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0701 23:18:28.318902  275844 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0701 23:18:28.318936  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0701 23:18:28.336787  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0701 23:18:28.336818  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0701 23:18:28.423063  275844 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:18:28.423089  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0701 23:18:28.427844  275844 start.go:809] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
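	(The replace pipeline run at 23:18:27 inserts a hosts block ahead of the `forward . /etc/resolv.conf` line in the coredns ConfigMap, so in-cluster lookups of host.minikube.internal resolve to the network gateway. The resulting Corefile fragment looks like:

	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }
	)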
	I0701 23:18:28.432989  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0701 23:18:28.433019  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0701 23:18:28.442227  275844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:18:28.523695  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0701 23:18:28.523727  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0701 23:18:28.618333  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0701 23:18:28.618365  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0701 23:18:28.636855  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0701 23:18:28.636885  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0701 23:18:28.652952  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0701 23:18:28.652974  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0701 23:18:28.739775  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:18:28.739814  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0701 23:18:28.832453  275844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:18:29.251359  275844 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:29.544427  275844 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0701 23:18:29.545959  275844 addons.go:414] enableAddons completed in 1.78263451s
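	(These four addons are enabled because the pre-stop test steps requested them (toEnable=map[dashboard:true metrics-server:true], plus the default-storageclass and storage-provisioner defaults). Outside the test harness the equivalent would be, illustratively:

	    $ minikube addons enable metrics-server -p default-k8s-different-port-20220701230032-10066
	    $ minikube addons enable dashboard -p default-k8s-different-port-20220701230032-10066
	)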
	I0701 23:18:29.863227  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:31.863254  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:34.363382  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:36.363413  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:38.363717  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:40.863294  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:42.863457  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:45.363613  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:47.863096  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:49.863849  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:52.363545  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:54.363732  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:56.862624  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:58.863111  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:00.863425  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:03.363680  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:05.363957  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:07.364035  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:09.364588  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:11.863661  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:14.362895  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:16.363322  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:18.363478  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:20.863309  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:23.362826  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:25.363077  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:27.863010  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:29.863599  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:31.863690  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:34.363405  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:36.862844  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:39.363009  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:41.863136  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:43.863182  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:46.362920  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:48.363519  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:50.365995  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:52.863524  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:55.363287  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:57.363494  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:59.863734  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:02.362902  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:04.363417  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:06.863299  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:08.863390  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:11.363598  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:13.863329  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:16.363213  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:18.363246  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:20.862846  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:22.863412  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:25.363572  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:27.863611  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:29.863926  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:32.363408  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:34.363894  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:36.863454  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:39.363389  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:41.363918  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:43.364119  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:45.863734  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:48.363224  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:50.862933  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:52.863303  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:54.863540  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:57.363333  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:59.363619  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:01.863747  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:04.363462  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:06.863642  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:09.363229  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:11.863382  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:14.363453  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:16.363483  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:18.863559  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:20.863852  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:23.363579  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:25.863700  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:27.863820  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:30.363502  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:32.365183  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:34.862977  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:36.863647  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:39.363489  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:41.862636  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:43.863818  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:46.362854  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:48.363608  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:50.863761  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:53.363511  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:55.363792  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:57.863460  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:00.363227  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:02.863069  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:04.863654  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:06.863767  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:09.362775  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:11.363266  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:13.363390  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:15.863386  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:18.363719  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:20.363796  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:22.863167  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:24.863249  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:27.362843  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:27.865277  275844 node_ready.go:38] duration metric: took 4m0.008613758s waiting for node "default-k8s-different-port-20220701230032-10066" to be "Ready" ...
	I0701 23:22:27.867660  275844 out.go:177] 
	W0701 23:22:27.869191  275844 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0701 23:22:27.869208  275844 out.go:239] * 
	W0701 23:22:27.869949  275844 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 23:22:27.871815  275844 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-different-port-20220701230032-10066 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2": exit status 80
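(The 6m0s node wait above never saw "Ready", which usually means the kubelet or CNI never converged after the restart. Typical next diagnostic steps, illustrative and not part of the captured log:

    $ kubectl describe node default-k8s-different-port-20220701230032-10066
    $ kubectl -n kube-system get pods -o wide
    $ minikube -p default-k8s-different-port-20220701230032-10066 logs --file=logs.txt
)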
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220701230032-10066
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220701230032-10066:

-- stdout --
	[
	    {
	        "Id": "261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93",
	        "Created": "2022-07-01T23:00:40.408283404Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 276238,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-01T23:13:37.253165486Z",
	            "FinishedAt": "2022-07-01T23:13:35.896385451Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93/hostname",
	        "HostsPath": "/var/lib/docker/containers/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93/hosts",
	        "LogPath": "/var/lib/docker/containers/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93-json.log",
	        "Name": "/default-k8s-different-port-20220701230032-10066",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220701230032-10066:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220701230032-10066",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dee8e48f45f2d547450f4daebe89b962ba00d4c3c0e85728311b87d0be50d5f2-init/diff:/var/lib/docker/overlay2/edb6d17bdc5cabdb1fac75f829fa6b696b451285a56daf994d10472ac162d1fa/diff:/var/lib/docker/overlay2/12553db54f9401cb3cd4ea304c82950d708bec4ffbeafa05f361df03389b23f5/diff:/var/lib/docker/overlay2/7e3fcd2813dd5acb24030b1419b7a8a53ea2329287a62c3cd166eb5e47617a4e/diff:/var/lib/docker/overlay2/15f31da888d327ea20972c47badf360a35e5d2b17a3fae8f69cb0d0d1db2bfa0/diff:/var/lib/docker/overlay2/4d7fcdaa5483b16cc999e79056aae74f5814a7aca964696154ac6ec31eebe508/diff:/var/lib/docker/overlay2/9f3a7dbc3c70fba7c77f5d8e54e394ac0c37123defe036d76af7d048d08b974e/diff:/var/lib/docker/overlay2/4f0b0b0343fd3ac1a6d5f774e91228b75f1e33f810714d09d38ecd1e38cfe250/diff:/var/lib/docker/overlay2/bfd684b2b6e2cfe1f30901dbf02332074c65e296b386b083ea38c10128b42115/diff:/var/lib/docker/overlay2/d05e53b39ab1962ee7dd1a17c7c6df7e7db940db5a19b0d9312ccafaf3c67afc/diff:/var/lib/docker/overlay2/7a24a38f96e1aff9985bf714d962a37888eced9c4b8a3018e499080634306871/diff:/var/lib/docker/overlay2/b54ee85e20ecd71a593d453d879a26c40869049e30ba5b18b4c021c4cf667ec6/diff:/var/lib/docker/overlay2/cc9f249b45cce30a5a0432a644435d68e765dc097e3bfdbde41b7d725f258866/diff:/var/lib/docker/overlay2/dac381b24f77e5de76295d05ca2d4ae6a611d31efdf28050dc8acbc16f893037/diff:/var/lib/docker/overlay2/fe9cd2517ce90b2330101b987c76605b9d73f40b8c967e33975ad22bd3e427df/diff:/var/lib/docker/overlay2/34da7eb6e789c53ba8f86e0d904243d3314979a523044238ab8b7e9e95f685d4/diff:/var/lib/docker/overlay2/44c3ffe75f70e4d9d5d17064609a4a167961cdca81aab03b2a72dfbe05d09f41/diff:/var/lib/docker/overlay2/76feeb878d0278a48230287307ce3a01000291c1284d2222d114e37836ebc153/diff:/var/lib/docker/overlay2/3ab5eb284edb3402610bb5d4fbd0d8c3fc6b53fd39d2a66cc3a9bb1e313fe5ee/diff:/var/lib/docker/overlay2/7604bb52dbaf4e2c359f0b2b11aa345bbf33498b0346044f4c6ff3178a3e8df5/diff:/var/lib/docker/overlay2/0a19abc50663f9812511b849524a7625a16f183fa64aff5a0a9c1c7664cc2027/diff:/var/lib/docker/overlay2/00d25c59a62da71857955ae29c7c803b77cb4c97b23913f166c62440a3b63456/diff:/var/lib/docker/overlay2/cd6c06e5c4d7bdfb5a78ffcf6c0cf331a6101b713bd238ad5c0ab119ad599bf4/diff:/var/lib/docker/overlay2/73ae644d477b17687dc53b7176c9e05aa10e0e0cc22b0d2adbd96493280d2696/diff:/var/lib/docker/overlay2/1490b9c59b00b3ee6078671a8023960891ee2bfc21bfe92c12a71c87ea04fff1/diff:/var/lib/docker/overlay2/42f40e76b67a2f74340f543b0beab44f98b10e0a0903d4006a173272c52a31d0/diff:/var/lib/docker/overlay2/648763e18e8325f20ff26dd88a1788a2d9e575a9ca24f44b2430918381add179/diff:/var/lib/docker/overlay2/955b13af9b6a054abdcd0dfdf71ce57d6da1b5718bee40c52fc7739b4ff16496/diff:/var/lib/docker/overlay2/1e55f6bfa306e6667808548601e508c0335b7021c9c6791ee5ed3df2514aff12/diff:/var/lib/docker/overlay2/ed1fd4b4489438557a54639b2ed769eb0371193cd0129b222edea8f79274f7e8/diff:/var/lib/docker/overlay2/c4058f514a3e669de2d6139c7bc94d4187679acabf288e44e25b5d50db5b14ac/diff:/var/lib/docker/overlay2/f55c5f04c4a64f585a9b90e7f23e5d5142014f06b3bf877739879029b76b574f/diff:/var/lib/docker/overlay2/74b9141dda92027ee02e8b3abc4bb05ed13ce01d58f4c3d644e7235e411a863e/diff:/var/lib/docker/overlay2/53dd59b4b1dfa382f4b296dffd27b84e247851098ea1c719fa3b8ed3e158e73d/diff:/var/lib/docker/overlay2/01e3082bf93042f963fa4d0b92796bd5a1f3663e87f24a76a6fb5f3dabdb3d2f/diff:/var/lib/docker/overlay2/6b96d2f01ba7bf16240d81397be753c74fa1eb78889ce36d1a88224496f5ee64/diff:/var/lib/docker/overlay2/1c326eb1859e3f119094b72e595fe89b3be1895193b0f9124ec3aa7be78fa0ff/diff:/var/lib/docker/overlay2/9c31f48fe49160b4b3d824b787329ea1d50133fd86b0c7f2054bebed2bd136b4/diff:/var/lib/docker/overlay2/6b78f7313fcedc631cc839c84f80e065670f47ccc8e548578527bec6b0f2cce3/diff:/var/lib/docker/overlay2/6bf15a167fccdef650b6e6b0daa521189e2c744a9a41640bc0e963421f139437/diff:/var/lib/docker/overlay2/b01230a9beae37bded75410c1a3849626080918fdc36535a1f4092f9227a7ccf/diff:/var/lib/docker/overlay2/793ce1632cff927c8e2528b23a62c94ed141cb52d5df04225085072c61ca5bb7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dee8e48f45f2d547450f4daebe89b962ba00d4c3c0e85728311b87d0be50d5f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dee8e48f45f2d547450f4daebe89b962ba00d4c3c0e85728311b87d0be50d5f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dee8e48f45f2d547450f4daebe89b962ba00d4c3c0e85728311b87d0be50d5f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220701230032-10066",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220701230032-10066/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220701230032-10066",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220701230032-10066",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220701230032-10066",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a2ba47ae2c38208a3afb09c62b2914d723cce37fbff94b39953ca0016b34bc8c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49442"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49441"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49440"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49439"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a2ba47ae2c38",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220701230032-10066": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "261fd4f89726",
	                        "default-k8s-different-port-20220701230032-10066"
	                    ],
	                    "NetworkID": "08b054338871e09e9987c4187ebe43c21ee49646be113b14ac2205c8647ea77d",
	                    "EndpointID": "9c2cdcd15c5d5bebda898002f555e5c0adc6dc1d266a40af76b7e4a391cd8cc6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
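(The NetworkSettings.Ports map above is what the earlier `docker container inspect -f` calls were parsing. The same Go-template extraction can be reproduced directly, using the format string from the log; the expected output matches the mapped SSH port shown above:

    $ docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-different-port-20220701230032-10066
    49442
)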
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220701230032-10066 -n default-k8s-different-port-20220701230032-10066
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220701230032-10066 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	| start   | -p newest-cni-20220701230537-10066 --memory=2200           | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:06 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p newest-cni-20220701230537-10066 --memory=2200           | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:07 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC | 01 Jul 22 23:10 UTC |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC | 01 Jul 22 23:10 UTC |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC | 01 Jul 22 23:10 UTC |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC |                     |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |          |         |         |                     |                     |
	|         | --wait=true --preload=false                                |          |         |         |                     |                     |
	|         | --driver=docker                                            |          |         |         |                     |                     |
	|         | --container-runtime=containerd                             |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | default-k8s-different-port-20220701230032-10066            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | default-k8s-different-port-20220701230032-10066            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | default-k8s-different-port-20220701230032-10066            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC |                     |
	|         | default-k8s-different-port-20220701230032-10066            |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/01 23:13:36
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 23:13:36.508585  275844 out.go:296] Setting OutFile to fd 1 ...
	I0701 23:13:36.508812  275844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:13:36.508825  275844 out.go:309] Setting ErrFile to fd 2...
	I0701 23:13:36.508833  275844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:13:36.509394  275844 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 23:13:36.509707  275844 out.go:303] Setting JSON to false
	I0701 23:13:36.511123  275844 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3370,"bootTime":1656713847,"procs":332,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 23:13:36.511210  275844 start.go:125] virtualization: kvm guest
	I0701 23:13:36.513852  275844 out.go:177] * [default-k8s-different-port-20220701230032-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0701 23:13:36.516346  275844 out.go:177]   - MINIKUBE_LOCATION=14483
	I0701 23:13:36.516221  275844 notify.go:193] Checking for updates...
	I0701 23:13:36.517990  275844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 23:13:36.519337  275844 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:13:36.520961  275844 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 23:13:36.522517  275844 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 23:13:36.524336  275844 config.go:178] Loaded profile config "default-k8s-different-port-20220701230032-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:13:36.524783  275844 driver.go:360] Setting default libvirt URI to qemu:///system
	I0701 23:13:36.571678  275844 docker.go:137] docker version: linux-20.10.17
	I0701 23:13:36.571797  275844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:13:36.688003  275844 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-07-01 23:13:36.603240517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:13:36.688097  275844 docker.go:254] overlay module found
	I0701 23:13:36.689718  275844 out.go:177] * Using the docker driver based on existing profile
	I0701 23:13:36.691073  275844 start.go:284] selected driver: docker
	I0701 23:13:36.691091  275844 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:13:36.691176  275844 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 23:13:36.711421  275844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:13:36.815393  275844 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-07-01 23:13:36.741940503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:13:36.815669  275844 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 23:13:36.815700  275844 cni.go:95] Creating CNI manager for ""
	I0701 23:13:36.815708  275844 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:13:36.815734  275844 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:13:36.817973  275844 out.go:177] * Starting control plane node default-k8s-different-port-20220701230032-10066 in cluster default-k8s-different-port-20220701230032-10066
	I0701 23:13:36.819338  275844 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0701 23:13:36.820691  275844 out.go:177] * Pulling base image ...
	I0701 23:13:36.821863  275844 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:13:36.821911  275844 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4
	I0701 23:13:36.821925  275844 cache.go:57] Caching tarball of preloaded images
	I0701 23:13:36.821988  275844 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0701 23:13:36.822107  275844 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 23:13:36.822124  275844 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on containerd
	I0701 23:13:36.822229  275844 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/config.json ...
	I0701 23:13:36.857028  275844 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0701 23:13:36.857061  275844 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0701 23:13:36.857085  275844 cache.go:208] Successfully downloaded all kic artifacts
	I0701 23:13:36.857128  275844 start.go:352] acquiring machines lock for default-k8s-different-port-20220701230032-10066: {Name:mk7518221e8259d073969ba977a5dbef99fe5935 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:13:36.857241  275844 start.go:356] acquired machines lock for "default-k8s-different-port-20220701230032-10066" in 79.413µs
	I0701 23:13:36.857265  275844 start.go:94] Skipping create...Using existing machine configuration
	I0701 23:13:36.857273  275844 fix.go:55] fixHost starting: 
	I0701 23:13:36.857565  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:13:36.889959  275844 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220701230032-10066: state=Stopped err=<nil>
	W0701 23:13:36.890003  275844 fix.go:129] unexpected machine state, will restart: <nil>
	I0701 23:13:36.892196  275844 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220701230032-10066" ...
	I0701 23:13:34.335098  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:36.335670  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:36.893583  275844 cli_runner.go:164] Run: docker start default-k8s-different-port-20220701230032-10066
	I0701 23:13:37.260876  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:13:37.298699  275844 kic.go:416] container "default-k8s-different-port-20220701230032-10066" state is running.
	I0701 23:13:37.299071  275844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:13:37.333911  275844 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/config.json ...
	I0701 23:13:37.334149  275844 machine.go:88] provisioning docker machine ...
	I0701 23:13:37.334173  275844 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220701230032-10066"
	I0701 23:13:37.334223  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:37.368604  275844 main.go:134] libmachine: Using SSH client type: native
	I0701 23:13:37.368836  275844 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0701 23:13:37.368867  275844 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220701230032-10066 && echo "default-k8s-different-port-20220701230032-10066" | sudo tee /etc/hostname
	I0701 23:13:37.369499  275844 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35278->127.0.0.1:49442: read: connection reset by peer
	I0701 23:13:40.494516  275844 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220701230032-10066
	
	I0701 23:13:40.494611  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:40.527972  275844 main.go:134] libmachine: Using SSH client type: native
	I0701 23:13:40.528160  275844 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0701 23:13:40.528184  275844 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220701230032-10066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220701230032-10066/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220701230032-10066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 23:13:40.641942  275844 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0701 23:13:40.641973  275844 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
	I0701 23:13:40.642000  275844 ubuntu.go:177] setting up certificates
	I0701 23:13:40.642011  275844 provision.go:83] configureAuth start
	I0701 23:13:40.642064  275844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:13:40.675855  275844 provision.go:138] copyHostCerts
	I0701 23:13:40.675913  275844 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
	I0701 23:13:40.675927  275844 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
	I0701 23:13:40.675991  275844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
	I0701 23:13:40.676060  275844 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
	I0701 23:13:40.676071  275844 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
	I0701 23:13:40.676098  275844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
	I0701 23:13:40.676148  275844 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
	I0701 23:13:40.676158  275844 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
	I0701 23:13:40.676192  275844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1679 bytes)
	I0701 23:13:40.676235  275844 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220701230032-10066 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220701230032-10066]
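
The provision.go line above issues the machine's server certificate with the SANs listed in san=[...]. A generic crypto/x509 sketch of that step, signing with an already-loaded CA; names and lifetimes are copied from the log and profile config above, but this is not minikube's provision code:

	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// serverCert returns a DER-encoded server certificate (and its key)
	// signed by ca/caKey, covering the SANs from the log line above.
	func serverCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1), // a production issuer would randomize this
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-different-port-20220701230032-10066"}},
			DNSNames:     []string{"localhost", "minikube", "default-k8s-different-port-20220701230032-10066"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")}, // 127.0.0.1 listed twice in the log, deduplicated here
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}
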
	I0701 23:13:40.954393  275844 provision.go:172] copyRemoteCerts
	I0701 23:13:40.954451  275844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 23:13:40.954482  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:40.989611  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.073447  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 23:13:41.090826  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0701 23:13:41.107547  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 23:13:41.124219  275844 provision.go:86] duration metric: configureAuth took 482.194415ms
	I0701 23:13:41.124245  275844 ubuntu.go:193] setting minikube options for container-runtime
	I0701 23:13:41.124417  275844 config.go:178] Loaded profile config "default-k8s-different-port-20220701230032-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:13:41.124431  275844 machine.go:91] provisioned docker machine in 3.790266635s
	I0701 23:13:41.124441  275844 start.go:306] post-start starting for "default-k8s-different-port-20220701230032-10066" (driver="docker")
	I0701 23:13:41.124452  275844 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 23:13:41.124510  275844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 23:13:41.124554  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.158325  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.245657  275844 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 23:13:41.248516  275844 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 23:13:41.248538  275844 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 23:13:41.248546  275844 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 23:13:41.248551  275844 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0701 23:13:41.248559  275844 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
	I0701 23:13:41.248598  275844 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
	I0701 23:13:41.248664  275844 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem -> 100662.pem in /etc/ssl/certs
	I0701 23:13:41.248742  275844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 23:13:41.255535  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:13:41.272444  275844 start.go:309] post-start completed in 147.990653ms
	I0701 23:13:41.272501  275844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 23:13:41.272534  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.306973  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.391227  275844 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 23:13:41.395145  275844 fix.go:57] fixHost completed within 4.53786816s
	I0701 23:13:41.395167  275844 start.go:81] releasing machines lock for "default-k8s-different-port-20220701230032-10066", held for 4.537914302s
	I0701 23:13:41.395240  275844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.428938  275844 ssh_runner.go:195] Run: systemctl --version
	I0701 23:13:41.428983  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.428986  275844 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0701 23:13:41.429036  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.463442  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.464061  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:38.835336  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:41.334767  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:43.334801  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:41.546236  275844 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 23:13:41.557434  275844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 23:13:41.566944  275844 docker.go:179] disabling docker service ...
	I0701 23:13:41.566994  275844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 23:13:41.575898  275844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 23:13:41.584165  275844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 23:13:41.651388  275844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 23:13:41.723308  275844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0701 23:13:41.731887  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 23:13:41.744366  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0701 23:13:41.752324  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0701 23:13:41.760056  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0701 23:13:41.767864  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0701 23:13:41.775399  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0701 23:13:41.782555  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %!s(MISSING) "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
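
The base64 payload in this command is short enough to decode by hand: "dmVyc2lvbiA9IDIK" is "version = 2" plus a trailing newline, so the drop-in written to /etc/containerd/containerd.conf.d/02-containerd.conf simply pins the containerd config schema:

	version = 2
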
	I0701 23:13:41.794357  275844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 23:13:41.800246  275844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 23:13:41.806090  275844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 23:13:41.881056  275844 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 23:13:41.950865  275844 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0701 23:13:41.950932  275844 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0701 23:13:41.955104  275844 start.go:471] Will wait 60s for crictl version
	I0701 23:13:41.955155  275844 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:13:41.981690  275844 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-07-01T23:13:41Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0701 23:13:45.834614  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:47.835771  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:53.029041  275844 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:13:53.051421  275844 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0701 23:13:53.051470  275844 ssh_runner.go:195] Run: containerd --version
	I0701 23:13:53.078982  275844 ssh_runner.go:195] Run: containerd --version
	I0701 23:13:53.109597  275844 out.go:177] * Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	I0701 23:13:50.335036  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:52.834973  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:53.110955  275844 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220701230032-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 23:13:53.143106  275844 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0701 23:13:53.146306  275844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:13:53.155228  275844 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:13:53.155287  275844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:13:53.177026  275844 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:13:53.177047  275844 containerd.go:461] Images already preloaded, skipping extraction
	I0701 23:13:53.177094  275844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:13:53.198475  275844 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:13:53.198501  275844 cache_images.go:84] Images are preloaded, skipping loading
	I0701 23:13:53.198643  275844 ssh_runner.go:195] Run: sudo crictl info
	I0701 23:13:53.221518  275844 cni.go:95] Creating CNI manager for ""
	I0701 23:13:53.221540  275844 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:13:53.221552  275844 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0701 23:13:53.221564  275844 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220701230032-10066 NodeName:default-k8s-different-port-20220701230032-10066 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0701 23:13:53.221715  275844 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220701230032-10066"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
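The config dump above bundles four kubeadm documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) into a single file that is staged on the node as /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sketch for inspecting the rendered file by hand, assuming the profile name from this run:

    # print the staged kubeadm config from inside the minikube node
    minikube -p default-k8s-different-port-20220701230032-10066 ssh -- \
      sudo cat /var/tmp/minikube/kubeadm.yaml.new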
	
	I0701 23:13:53.221814  275844 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220701230032-10066 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0701 23:13:53.221875  275844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0701 23:13:53.228898  275844 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 23:13:53.228952  275844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 23:13:53.235366  275844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0701 23:13:53.247371  275844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 23:13:53.259313  275844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
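The three scp lines above push the kubelet systemd drop-in (10-kubeadm.conf), the kubelet.service unit, and the new kubeadm.yaml onto the node. A quick way to confirm the drop-in is in effect, again assuming this run's profile name:

    # show the effective kubelet unit, including the 10-kubeadm.conf drop-in
    minikube -p default-k8s-different-port-20220701230032-10066 ssh -- \
      systemctl cat kubelet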
	I0701 23:13:53.271530  275844 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0701 23:13:53.274142  275844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:13:53.282892  275844 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066 for IP: 192.168.76.2
	I0701 23:13:53.282980  275844 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
	I0701 23:13:53.283015  275844 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
	I0701 23:13:53.283078  275844 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/client.key
	I0701 23:13:53.283124  275844 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key.31bdca25
	I0701 23:13:53.283163  275844 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.key
	I0701 23:13:53.283252  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem (1338 bytes)
	W0701 23:13:53.283280  275844 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066_empty.pem, impossibly tiny 0 bytes
	I0701 23:13:53.283295  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 23:13:53.283320  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
	I0701 23:13:53.283343  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
	I0701 23:13:53.283367  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1679 bytes)
	I0701 23:13:53.283409  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:13:53.283939  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0701 23:13:53.300388  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 23:13:53.317215  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 23:13:53.333335  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0701 23:13:53.349529  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 23:13:53.365494  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0701 23:13:53.381103  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 23:13:53.396977  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 23:13:53.412881  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 23:13:53.429709  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem --> /usr/share/ca-certificates/10066.pem (1338 bytes)
	I0701 23:13:53.446017  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /usr/share/ca-certificates/100662.pem (1708 bytes)
	I0701 23:13:53.461814  275844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 23:13:53.473437  275844 ssh_runner.go:195] Run: openssl version
	I0701 23:13:53.478032  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 23:13:53.484818  275844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:13:53.487660  275844 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul  1 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:13:53.487710  275844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:13:53.492105  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 23:13:53.498584  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10066.pem && ln -fs /usr/share/ca-certificates/10066.pem /etc/ssl/certs/10066.pem"
	I0701 23:13:53.505448  275844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10066.pem
	I0701 23:13:53.508315  275844 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul  1 22:28 /usr/share/ca-certificates/10066.pem
	I0701 23:13:53.508365  275844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10066.pem
	I0701 23:13:53.512833  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10066.pem /etc/ssl/certs/51391683.0"
	I0701 23:13:53.519315  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100662.pem && ln -fs /usr/share/ca-certificates/100662.pem /etc/ssl/certs/100662.pem"
	I0701 23:13:53.526653  275844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100662.pem
	I0701 23:13:53.529618  275844 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul  1 22:28 /usr/share/ca-certificates/100662.pem
	I0701 23:13:53.529700  275844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100662.pem
	I0701 23:13:53.534593  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100662.pem /etc/ssl/certs/3ec20f2e.0"
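The openssl x509 -hash calls above compute the subject-hash filename that OpenSSL uses for CA lookup, which is why the symlinks end up named b5213941.0, 51391683.0 and 3ec20f2e.0. The same linking step, written out as a sketch to run inside the node (e.g. via minikube ssh):

    # derive the subject hash and create the lookup symlink OpenSSL expects
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"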
	I0701 23:13:53.541972  275844 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
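StartCluster logs the full profile spec as one struct; the same data lives in the profile's config.json on the host. A hedged sketch for reading it back, assuming MINIKUBE_HOME points at the .minikube directory as it does in this run:

    # the on-disk source of the StartCluster spec above
    cat "$MINIKUBE_HOME/profiles/default-k8s-different-port-20220701230032-10066/config.json"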
	I0701 23:13:53.542071  275844 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0701 23:13:53.542137  275844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:13:53.565066  275844 cri.go:87] found id: "e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e"
	I0701 23:13:53.565094  275844 cri.go:87] found id: "b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7"
	I0701 23:13:53.565103  275844 cri.go:87] found id: "50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52"
	I0701 23:13:53.565110  275844 cri.go:87] found id: "f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c"
	I0701 23:13:53.565115  275844 cri.go:87] found id: "a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2"
	I0701 23:13:53.565121  275844 cri.go:87] found id: "042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c"
	I0701 23:13:53.565127  275844 cri.go:87] found id: ""
	I0701 23:13:53.565155  275844 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0701 23:13:53.577099  275844 cri.go:114] JSON = null
	W0701 23:13:53.577140  275844 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
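The warning above comes from a cross-check: crictl reported six kube-system containers, but runc under the k8s.io root returned null, so there was nothing to unpause. Both probes can be replayed by hand inside the node:

    # list kube-system container IDs as seen by the CRI
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # list what runc itself is tracking (returned null in this run)
    sudo runc --root /run/containerd/runc/k8s.io list -f json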
	I0701 23:13:53.577183  275844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 23:13:53.583727  275844 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0701 23:13:53.583745  275844 kubeadm.go:626] restartCluster start
	I0701 23:13:53.583773  275844 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 23:13:53.589812  275844 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:53.590282  275844 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220701230032-10066" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:13:53.590469  275844 kubeconfig.go:127] "default-k8s-different-port-20220701230032-10066" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig - will repair!
	I0701 23:13:53.590950  275844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:13:53.592051  275844 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 23:13:53.598266  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:53.598304  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:53.605628  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:53.806026  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:53.806089  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:53.814576  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.005749  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.005835  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.013967  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.206355  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.206416  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.215350  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.406581  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.406651  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.415525  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.605755  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.605834  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.614602  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.805813  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.805894  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.814430  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.006748  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.006824  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.015390  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.206606  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.206712  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.215161  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.406468  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.406570  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.415209  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.606590  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.606691  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.615437  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.806738  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.806828  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.815002  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.006349  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.006435  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.014726  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.205912  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.205993  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.214477  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.405750  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.405831  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.414060  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.334779  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:57.835309  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:56.606652  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.606715  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.615356  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.615374  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.615402  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.623156  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.623180  275844 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
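The block above polls for a kube-apiserver process roughly every 200ms and, after repeated misses, concludes the cluster needs reconfiguring. The probe itself is a single pgrep, runnable as-is on the node:

    # match the newest (-n) process whose full command line (-f) matches
    # the pattern exactly (-x): a kube-apiserver started by minikube
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'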
	I0701 23:13:56.623187  275844 kubeadm.go:1092] stopping kube-system containers ...
	I0701 23:13:56.623201  275844 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0701 23:13:56.623258  275844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:13:56.649113  275844 cri.go:87] found id: "e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e"
	I0701 23:13:56.649133  275844 cri.go:87] found id: "b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7"
	I0701 23:13:56.649140  275844 cri.go:87] found id: "50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52"
	I0701 23:13:56.649146  275844 cri.go:87] found id: "f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c"
	I0701 23:13:56.649152  275844 cri.go:87] found id: "a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2"
	I0701 23:13:56.649158  275844 cri.go:87] found id: "042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c"
	I0701 23:13:56.649164  275844 cri.go:87] found id: ""
	I0701 23:13:56.649169  275844 cri.go:232] Stopping containers: [e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7 50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52 f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2 042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c]
	I0701 23:13:56.649212  275844 ssh_runner.go:195] Run: which crictl
	I0701 23:13:56.652179  275844 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7 50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52 f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2 042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c
	I0701 23:13:56.676014  275844 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0701 23:13:56.685537  275844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:13:56.692196  275844 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul  1 23:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul  1 23:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Jul  1 23:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jul  1 23:00 /etc/kubernetes/scheduler.conf
	
	I0701 23:13:56.692247  275844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0701 23:13:56.698641  275844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0701 23:13:56.704856  275844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0701 23:13:56.711153  275844 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.711210  275844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 23:13:56.717322  275844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0701 23:13:56.723423  275844 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.723459  275844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0701 23:13:56.729312  275844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:13:56.736598  275844 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0701 23:13:56.736617  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:56.781688  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:57.445598  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:57.633371  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:57.679946  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
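Rather than a full kubeadm init, the reconfigure path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged config. Any single phase can be re-run by hand with the same invocation, for example:

    # re-run one kubeadm init phase with minikube's pinned binaries
    sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" \
      kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml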
	I0701 23:13:57.749368  275844 api_server.go:51] waiting for apiserver process to appear ...
	I0701 23:13:57.749432  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:13:58.318180  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:13:58.818690  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:13:58.830974  275844 api_server.go:71] duration metric: took 1.081606586s to wait for apiserver process to appear ...
	I0701 23:13:58.831001  275844 api_server.go:87] waiting for apiserver healthz status ...
	I0701 23:13:58.831034  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:13:58.831436  275844 api_server.go:256] stopped: https://192.168.76.2:8444/healthz: Get "https://192.168.76.2:8444/healthz": dial tcp 192.168.76.2:8444: connect: connection refused
	I0701 23:13:59.331708  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:14:01.921615  275844 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:14:01.921654  275844 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:14:02.332201  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:14:02.336755  275844 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:14:02.336792  275844 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:14:02.831892  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:14:02.836248  275844 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:14:02.836275  275844 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
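The 500s above are the apiserver's composite healthz: each [+]/[-] line is one named check, and the endpoint stays unhealthy until every poststarthook (here rbac/bootstrap-roles and the priority-class bootstrap last) reports ok. The same verbose breakdown can be fetched directly; a sketch, skipping certificate verification since the apiserver serves a cluster-local cert:

    # -k: the apiserver's certificate is signed by the cluster's own CA
    curl -k 'https://192.168.76.2:8444/healthz?verbose'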
	I0701 23:14:03.331795  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:14:03.337047  275844 api_server.go:266] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0701 23:14:03.343503  275844 api_server.go:140] control plane version: v1.24.2
	I0701 23:14:03.343525  275844 api_server.go:130] duration metric: took 4.512518171s to wait for apiserver health ...
	I0701 23:14:03.343535  275844 cni.go:95] Creating CNI manager for ""
	I0701 23:14:03.343540  275844 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:14:03.345598  275844 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:13:59.835489  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:02.335364  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:03.347224  275844 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:14:03.350686  275844 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:14:03.350707  275844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:14:03.363866  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0701 23:14:04.295415  275844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 23:14:04.301798  275844 system_pods.go:59] 9 kube-system pods found
	I0701 23:14:04.301825  275844 system_pods.go:61] "coredns-6d4b75cb6d-zmnqs" [f0e0d22f-cd83-4531-8778-32070816b159] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:14:04.301837  275844 system_pods.go:61] "etcd-default-k8s-different-port-20220701230032-10066" [c4b3993a-3a6c-4827-8250-b951a48b9432] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0701 23:14:04.301844  275844 system_pods.go:61] "kindnet-49h72" [bee4a070-eb2f-45af-a824-f8ebb08e21cb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0701 23:14:04.301851  275844 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220701230032-10066" [2ce9acd5-e8e7-425b-bb9b-5dd480397910] Running
	I0701 23:14:04.301860  275844 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220701230032-10066" [2fec1fad-34c5-4b47-8713-8e789b816ca6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0701 23:14:04.301868  275844 system_pods.go:61] "kube-proxy-qg5j2" [c67a38f9-ae75-40ea-8992-85a437368c50] Running
	I0701 23:14:04.301873  275844 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220701230032-10066" [49056cd0-4107-4377-ba51-b97af35cbe72] Running
	I0701 23:14:04.301882  275844 system_pods.go:61] "metrics-server-5c6f97fb75-mkq9q" [f5b66095-14d2-4de4-9f1d-2cd5371ec0fd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:14:04.301890  275844 system_pods.go:61] "storage-provisioner" [6e0344bb-c7de-41f4-95d2-f30576ae036c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:14:04.301898  275844 system_pods.go:74] duration metric: took 6.458628ms to wait for pod list to return data ...
	I0701 23:14:04.301907  275844 node_conditions.go:102] verifying NodePressure condition ...
	I0701 23:14:04.304305  275844 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0701 23:14:04.304330  275844 node_conditions.go:123] node cpu capacity is 8
	I0701 23:14:04.304343  275844 node_conditions.go:105] duration metric: took 2.432316ms to run NodePressure ...
	I0701 23:14:04.304363  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:14:04.434166  275844 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0701 23:14:04.438097  275844 kubeadm.go:777] kubelet initialised
	I0701 23:14:04.438123  275844 kubeadm.go:778] duration metric: took 3.933976ms waiting for restarted kubelet to initialise ...
	I0701 23:14:04.438131  275844 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:14:04.443068  275844 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace to be "Ready" ...
	I0701 23:14:06.448162  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:04.335402  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:06.335651  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:08.448866  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:10.948772  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:08.834525  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:11.335287  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:12.949108  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:15.448393  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:13.834432  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:15.835251  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:18.334462  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:17.948235  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:19.948671  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:20.334833  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:22.334898  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:21.948914  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:23.949013  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:26.448470  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:24.335241  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:26.834599  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:28.948377  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:30.948441  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:29.334764  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:31.834659  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:32.948974  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:35.448453  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:33.835115  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:36.334527  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:37.448971  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:39.449007  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:38.834645  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:41.335647  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:41.948832  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:43.948861  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:46.448244  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:43.834536  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:45.835152  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:48.334898  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:48.448469  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:50.448941  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:50.335336  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:52.834828  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:52.948268  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:54.948294  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:55.334778  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:56.331712  269883 pod_ready.go:81] duration metric: took 4m0.0026135s waiting for pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace to be "Ready" ...
	E0701 23:14:56.331755  269883 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0701 23:14:56.331779  269883 pod_ready.go:38] duration metric: took 4m0.007826908s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:14:56.331809  269883 kubeadm.go:630] restartCluster took 4m10.917993696s
	W0701 23:14:56.331941  269883 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0701 23:14:56.331974  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0701 23:14:57.984431  269883 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.65243003s)
	I0701 23:14:57.984496  269883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:14:57.994269  269883 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:14:58.001094  269883 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0701 23:14:58.001159  269883 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:14:58.007683  269883 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 23:14:58.007734  269883 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0701 23:14:56.949272  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:58.949543  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:01.449627  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:03.950758  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:06.448470  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:06.936698  269883 out.go:204]   - Generating certificates and keys ...
	I0701 23:15:06.939424  269883 out.go:204]   - Booting up control plane ...
	I0701 23:15:06.941904  269883 out.go:204]   - Configuring RBAC rules ...
	I0701 23:15:06.944403  269883 cni.go:95] Creating CNI manager for ""
	I0701 23:15:06.944429  269883 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:15:06.945976  269883 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:15:06.947445  269883 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:15:06.951630  269883 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:15:06.951650  269883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:15:06.966756  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0701 23:15:07.699280  269883 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 23:15:07.699401  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:07.699419  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04 minikube.k8s.io/name=no-preload-20220701225718-10066 minikube.k8s.io/updated_at=2022_07_01T23_15_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:07.706386  269883 ops.go:34] apiserver oom_adj: -16
	I0701 23:15:07.765556  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:08.338006  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:08.448681  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:10.448820  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:08.838005  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:09.337996  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:09.837437  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:10.337629  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:10.837363  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:11.337763  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:11.838075  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:12.338080  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:12.837649  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:13.337387  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:12.449226  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:14.948189  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:13.838035  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:14.337961  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:14.838063  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:15.338241  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:15.837500  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:16.337613  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:16.838363  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:17.337701  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:17.838061  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:18.337742  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:18.838306  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:19.337570  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:19.837680  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:19.892044  269883 kubeadm.go:1045] duration metric: took 12.192690701s to wait for elevateKubeSystemPrivileges.
	I0701 23:15:19.892072  269883 kubeadm.go:397] StartCluster complete in 4m34.521249474s
	I0701 23:15:19.892091  269883 settings.go:142] acquiring lock: {Name:mk319951f11766fbe002e53432d5b04e4322851e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:15:19.892193  269883 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:15:19.893038  269883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:15:20.407163  269883 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220701225718-10066" rescaled to 1
	I0701 23:15:20.407233  269883 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 23:15:20.409054  269883 out.go:177] * Verifying Kubernetes components...
	I0701 23:15:20.407277  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 23:15:20.407307  269883 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0701 23:15:20.407455  269883 config.go:178] Loaded profile config "no-preload-20220701225718-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:15:20.410261  269883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:15:20.410307  269883 addons.go:65] Setting dashboard=true in profile "no-preload-20220701225718-10066"
	I0701 23:15:20.410316  269883 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220701225718-10066"
	I0701 23:15:20.410322  269883 addons.go:65] Setting metrics-server=true in profile "no-preload-20220701225718-10066"
	I0701 23:15:20.410331  269883 addons.go:153] Setting addon dashboard=true in "no-preload-20220701225718-10066"
	I0701 23:15:20.410333  269883 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220701225718-10066"
	W0701 23:15:20.410339  269883 addons.go:162] addon dashboard should already be in state true
	I0701 23:15:20.410339  269883 addons.go:153] Setting addon metrics-server=true in "no-preload-20220701225718-10066"
	W0701 23:15:20.410348  269883 addons.go:162] addon metrics-server should already be in state true
	I0701 23:15:20.410378  269883 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 23:15:20.410384  269883 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 23:15:20.410308  269883 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220701225718-10066"
	I0701 23:15:20.410415  269883 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220701225718-10066"
	W0701 23:15:20.410428  269883 addons.go:162] addon storage-provisioner should already be in state true
	I0701 23:15:20.410464  269883 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 23:15:20.410690  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.410883  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.410898  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.410944  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.462647  269883 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0701 23:15:20.462859  269883 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220701225718-10066"
	I0701 23:15:20.464095  269883 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0701 23:15:20.464150  269883 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0701 23:15:20.464162  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0701 23:15:20.464109  269883 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0701 23:15:20.464170  269883 addons.go:162] addon default-storageclass should already be in state true
	I0701 23:15:20.465490  269883 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0701 23:15:20.466852  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0701 23:15:20.466866  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0701 23:15:20.465507  269883 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 23:15:20.466910  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:15:16.948842  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:18.949526  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:21.448543  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:20.468347  269883 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:15:20.468364  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 23:15:20.468412  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:15:20.465559  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:15:20.467550  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.497855  269883 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220701225718-10066" to be "Ready" ...
	I0701 23:15:20.497910  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0701 23:15:20.515144  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:15:20.520029  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:15:20.522289  269883 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 23:15:20.522310  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 23:15:20.522357  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:15:20.524783  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:15:20.568239  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:15:20.635327  269883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:15:20.635528  269883 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0701 23:15:20.635546  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0701 23:15:20.635773  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0701 23:15:20.635792  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0701 23:15:20.720153  269883 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0701 23:15:20.720184  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0701 23:15:20.720330  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0701 23:15:20.720356  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0701 23:15:20.735914  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0701 23:15:20.735942  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0701 23:15:20.738036  269883 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:15:20.738058  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0701 23:15:20.751468  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0701 23:15:20.751494  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0701 23:15:20.751989  269883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:15:20.830998  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0701 23:15:20.831029  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0701 23:15:20.835184  269883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 23:15:20.919071  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0701 23:15:20.919097  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0701 23:15:20.931803  269883 start.go:809] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
	I0701 23:15:20.938634  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0701 23:15:20.938663  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0701 23:15:21.027932  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0701 23:15:21.027961  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0701 23:15:21.120018  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:15:21.120044  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0701 23:15:21.139289  269883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:15:21.542831  269883 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220701225718-10066"
	I0701 23:15:22.318204  269883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.178852341s)
	I0701 23:15:22.320260  269883 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0701 23:15:22.321764  269883 addons.go:414] enableAddons completed in 1.914474598s
	I0701 23:15:22.506049  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:23.449129  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:25.948784  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:25.003072  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:27.003942  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:28.448748  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:30.948490  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:29.503567  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:31.503801  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:33.448177  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:35.948336  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:33.504159  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:35.504602  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:38.003422  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:37.948379  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:39.948560  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:40.504288  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:42.504480  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:41.949060  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:43.949319  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:46.449018  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:44.504514  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:47.002872  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:48.948340  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:51.448205  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:49.003639  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:51.503660  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:53.448249  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:55.448938  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:53.503915  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:56.003212  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:58.003807  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:57.948938  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:00.448920  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:00.504360  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:03.003336  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:02.449149  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:04.449385  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:05.503324  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:07.503773  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:06.948721  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:09.448775  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:10.003039  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:12.003124  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:11.948462  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:14.448466  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:16.449003  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:14.504207  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:17.003682  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:18.948883  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:21.448510  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:19.503321  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:21.503670  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:23.949051  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:26.448494  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:23.504169  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:26.003440  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:28.448711  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:30.950336  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:28.503980  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:31.003131  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:33.003828  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:33.448272  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:35.448817  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:35.503530  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:37.503721  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:37.449097  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:39.948158  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:39.504219  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:42.002779  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:41.948654  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:43.948719  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:46.448800  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:44.003891  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:46.503378  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:48.948666  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:50.948781  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:48.503897  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:51.003349  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:53.448686  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:55.948675  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:53.504221  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:56.003927  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:58.448263  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:00.948090  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:58.503637  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:00.503665  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:02.504224  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:02.948518  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:04.948735  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:05.003494  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:07.503949  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:06.948781  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:09.448480  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:11.448536  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:09.504075  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:12.003349  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:13.448566  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:15.948312  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:14.004090  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:16.503717  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:17.948940  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:20.449080  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:18.504348  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:21.002849  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:23.003827  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:22.948356  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:24.949063  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:25.503280  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:27.503458  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:26.949277  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:29.448968  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:29.503895  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:32.003296  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:31.948774  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:33.948802  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:36.448693  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:34.003684  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:36.504246  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:38.948200  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:41.449095  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:39.003597  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:41.504297  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:43.948596  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:46.448338  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:44.003653  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:46.003704  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:48.448406  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:50.449049  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:48.503830  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:51.002929  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:52.949418  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:55.448267  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:53.503901  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:56.003435  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:57.948337  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:59.949522  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:58.503409  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:00.504015  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:02.504075  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:02.449005  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:18:04.445635  275844 pod_ready.go:81] duration metric: took 4m0.002536043s waiting for pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace to be "Ready" ...
	E0701 23:18:04.445658  275844 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace to be "Ready" (will not retry!)
	I0701 23:18:04.445676  275844 pod_ready.go:38] duration metric: took 4m0.00753476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:18:04.445715  275844 kubeadm.go:630] restartCluster took 4m10.861963713s
	W0701 23:18:04.445855  275844 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
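The 4m0s timeout above is the result of a bounded readiness poll: the pod is fetched on an interval and its "Ready" condition is checked until the deadline expires, at which point minikube gives up on restarting the cluster and resets it. A minimal sketch of such a loop with client-go; this approximates the behavior visible in the log, not minikube's exact pod_ready.go implementation:

```go
// Poll a pod until its Ready condition is True or the deadline expires,
// mirroring the 2s cadence and 4m bound seen in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == v1.PodReady {
				return c.Status == v1.ConditionTrue, nil
			}
		}
		return false, nil // Pending pods may not have the condition yet
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "coredns-6d4b75cb6d-zmnqs", 4*time.Minute); err != nil {
		fmt.Println("timed out:", err) // corresponds to the "(will not retry!)" failure above
	}
}
```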
	I0701 23:18:04.445882  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0701 23:18:06.095490  275844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.649588457s)
	I0701 23:18:06.095547  275844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:18:06.104815  275844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:18:06.112334  275844 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0701 23:18:06.112376  275844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:18:06.119483  275844 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
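Exit status 2 from the ls probe means none of the kubeadm-generated configs exist on the node, so there is no stale control-plane state to clean up and minikube proceeds straight to `kubeadm init`. A small sketch of the same existence check, assuming a local shell rather than minikube's ssh_runner:

```go
// Probe for prior kubeadm state the same way the log does: if `ls` on the
// generated config files exits non-zero, a fresh init is required.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	cmd := exec.Command("sudo", append([]string{"ls", "-la"}, files...)...)
	if err := cmd.Run(); err != nil {
		// Exit status 2, as in the log: the confs are missing, so the
		// stale-config cleanup can be skipped.
		fmt.Println("config check failed, skipping stale config cleanup:", err)
	}
}
```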
	I0701 23:18:06.119534  275844 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0701 23:18:06.370658  275844 out.go:204]   - Generating certificates and keys ...
	I0701 23:18:05.003477  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:07.003973  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:07.277086  275844 out.go:204]   - Booting up control plane ...
	I0701 23:18:09.503332  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:11.504503  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:14.316275  275844 out.go:204]   - Configuring RBAC rules ...
	I0701 23:18:14.730162  275844 cni.go:95] Creating CNI manager for ""
	I0701 23:18:14.730189  275844 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:18:14.731634  275844 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:18:14.732857  275844 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:18:14.739597  275844 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:18:14.739622  275844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:18:14.825236  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
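Applying the CNI manifest reduces to copying cni.yaml onto the node and running `kubectl apply` against the cluster's kubeconfig, which is what the two preceding lines show. A sketch of that apply step reusing the paths from the log; invoking the binary directly mirrors the logged command and is an assumption, not minikube's internal API:

```go
// Apply the generated CNI (kindnet) manifest with the versioned kubectl
// binary and the in-cluster kubeconfig, as in the logged command.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.2/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("apply CNI manifest: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}
```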
	I0701 23:18:15.561507  275844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 23:18:15.561626  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:15.561637  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04 minikube.k8s.io/name=default-k8s-different-port-20220701230032-10066 minikube.k8s.io/updated_at=2022_07_01T23_18_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:15.568394  275844 ops.go:34] apiserver oom_adj: -16
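The oom_adj probe reads /proc/<pid>/oom_adj for the kube-apiserver process; the logged value of -16 tells the kernel's OOM killer to strongly prefer other victims over the apiserver. A sketch of the same probe, assuming a pgrep-style PID lookup as in the logged shell pipeline:

```go
// Resolve the kube-apiserver PID and read its oom_adj, like the logged
// `cat /proc/$(pgrep kube-apiserver)/oom_adj` pipeline.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		os.Exit(1)
	}
	pid := strings.Fields(string(out))[0] // first match, as with $(pgrep ...)
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("apiserver oom_adj: %s", adj) // -16 in the run above
}
```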
	I0701 23:18:15.634685  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:16.190642  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:14.002820  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:16.003780  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:16.690023  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:17.190952  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:17.690163  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:18.191022  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:18.690054  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:19.190723  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:19.690097  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:20.190968  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:20.691032  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:21.190434  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:18.503619  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:20.504289  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:23.003341  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:21.690038  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:22.190938  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:22.690621  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:23.190651  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:23.690833  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:24.190934  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:24.690962  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:25.190256  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:25.690333  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:26.190101  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:26.690887  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:27.190074  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:27.248216  275844 kubeadm.go:1045] duration metric: took 11.686670316s to wait for elevateKubeSystemPrivileges.
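The run of identical `kubectl get sa default` calls above is the elevateKubeSystemPrivileges wait: kubeadm creates the default ServiceAccount asynchronously, so minikube retries roughly every 500ms (11.7s here) before it can bind cluster-admin to kube-system:default. A hedged Go sketch of that retry loop, assuming the binary path and kubeconfig shown in the log:

    package sawait

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA re-runs "kubectl get sa default" until it succeeds
    // or the timeout expires, mirroring the polling pattern logged above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil // ServiceAccount exists; RBAC setup can proceed
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for default service account")
            }
            time.Sleep(500 * time.Millisecond)
        }
    }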
	I0701 23:18:27.248246  275844 kubeadm.go:397] StartCluster complete in 4m33.70628023s
	I0701 23:18:27.248264  275844 settings.go:142] acquiring lock: {Name:mk319951f11766fbe002e53432d5b04e4322851e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:18:27.248355  275844 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:18:27.249185  275844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:18:27.763199  275844 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220701230032-10066" rescaled to 1
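The kapi.go line above rescales the coredns deployment to a single replica, which is all a single-node cluster needs. With client-go this kind of rescale is typically done through the deployment's scale subresource; a minimal sketch under that assumption, not minikube's actual kapi.go:

    package corednsscale

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // scaleCoreDNS sets the coredns deployment's replica count via the
    // scale subresource, as a sketch of the rescale logged above.
    func scaleCoreDNS(cs kubernetes.Interface, replicas int32) error {
        ctx := context.TODO()
        deployments := cs.AppsV1().Deployments("kube-system")
        scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = replicas
        _, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }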
	I0701 23:18:27.763267  275844 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 23:18:27.766618  275844 out.go:177] * Verifying Kubernetes components...
	I0701 23:18:27.763306  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 23:18:27.763330  275844 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0701 23:18:27.766747  275844 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.766765  275844 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220701230032-10066"
	W0701 23:18:27.766778  275844 addons.go:162] addon storage-provisioner should already be in state true
	I0701 23:18:27.766806  275844 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.766825  275844 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.766828  275844 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:18:27.763473  275844 config.go:178] Loaded profile config "default-k8s-different-port-20220701230032-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:18:27.766824  275844 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.768481  275844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:18:27.768504  275844 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220701230032-10066"
	W0701 23:18:27.766835  275844 addons.go:162] addon dashboard should already be in state true
	I0701 23:18:27.768632  275844 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:18:27.766843  275844 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.768713  275844 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220701230032-10066"
	W0701 23:18:27.768733  275844 addons.go:162] addon metrics-server should already be in state true
	I0701 23:18:27.768768  275844 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:18:27.767332  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.768887  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.769184  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.769187  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.831262  275844 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0701 23:18:27.832550  275844 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 23:18:27.833969  275844 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:18:27.833992  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 23:18:27.834040  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:18:27.835526  275844 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0701 23:18:27.833023  275844 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.837673  275844 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	W0701 23:18:27.837677  275844 addons.go:162] addon default-storageclass should already be in state true
	I0701 23:18:25.003796  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:27.504253  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:27.837692  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0701 23:18:27.839084  275844 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0701 23:18:27.839099  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0701 23:18:27.839108  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0701 23:18:27.839153  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:18:27.837723  275844 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:18:27.839164  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:18:27.839691  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.856622  275844 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220701230032-10066" to be "Ready" ...
	I0701 23:18:27.856645  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0701 23:18:27.890091  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:18:27.891200  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:18:27.895622  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:18:27.896930  275844 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 23:18:27.896946  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 23:18:27.896980  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:18:27.937496  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:18:28.136017  275844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 23:18:28.136703  275844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:18:28.139953  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0701 23:18:28.139977  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0701 23:18:28.144217  275844 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0701 23:18:28.144239  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0701 23:18:28.234055  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0701 23:18:28.234083  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0701 23:18:28.318902  275844 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0701 23:18:28.318936  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0701 23:18:28.336787  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0701 23:18:28.336818  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0701 23:18:28.423063  275844 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:18:28.423089  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0701 23:18:28.427844  275844 start.go:809] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
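The sed pipeline at 23:18:27.856645 splices a hosts block into the CoreDNS Corefile ahead of the forward directive, so pods can resolve host.minikube.internal to the bridge gateway 192.168.76.1, which is what the "host record injected" line above confirms. Reassembled from that command, the injected Corefile fragment is:

        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }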
	I0701 23:18:28.432989  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0701 23:18:28.433019  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0701 23:18:28.442227  275844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:18:28.523695  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0701 23:18:28.523727  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0701 23:18:28.618333  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0701 23:18:28.618365  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0701 23:18:28.636855  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0701 23:18:28.636885  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0701 23:18:28.652952  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0701 23:18:28.652974  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0701 23:18:28.739775  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:18:28.739814  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0701 23:18:28.832453  275844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:18:29.251359  275844 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:29.544427  275844 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0701 23:18:29.545959  275844 addons.go:414] enableAddons completed in 1.78263451s
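Each addon above follows the same two-step pattern: every manifest is scp'd from memory to /etc/kubernetes/addons/<name>.yaml, then the whole set is applied in one batch with the version-matched kubectl under the cluster kubeconfig (see the apply commands at 23:18:28.442227 and 23:18:28.832453). A hedged Go sketch of that shape, assuming generic scp/run helpers:

    package addonsketch

    import "strings"

    // applyAddon copies each manifest to the node, then applies them all
    // in a single kubectl invocation, mirroring the scp/apply pairs above.
    func applyAddon(scp func(data []byte, dst string) error,
        run func(cmd string) error, manifests map[string][]byte) error {
        paths := make([]string, 0, len(manifests))
        for dst, data := range manifests {
            if err := scp(data, dst); err != nil {
                return err
            }
            paths = append(paths, dst)
        }
        cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
            "/var/lib/minikube/binaries/v1.24.2/kubectl apply -f " +
            strings.Join(paths, " -f ")
        return run(cmd)
    }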
	I0701 23:18:29.863227  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:30.003794  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:32.503813  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:31.863254  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:34.363382  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:36.363413  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:34.504191  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:37.003581  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:38.363717  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:40.863294  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:39.504225  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:42.003356  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:42.863457  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:45.363613  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:44.003625  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:46.504247  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:47.863096  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:49.863849  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:49.003291  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:51.003453  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:52.363545  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:54.363732  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:53.504320  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:56.003487  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:56.862624  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:58.863111  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:00.863425  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:58.504264  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:00.504489  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:03.003398  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:03.363680  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:05.363957  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:05.004021  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:07.503771  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:07.364035  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:09.364588  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:10.003129  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:12.003382  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:11.863661  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:14.362895  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:16.363322  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:14.504382  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:17.003939  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:19.503019  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:20.505831  269883 node_ready.go:38] duration metric: took 4m0.007935364s waiting for node "no-preload-20220701225718-10066" to be "Ready" ...
	I0701 23:19:20.507971  269883 out.go:177] 
	W0701 23:19:20.509514  269883 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0701 23:19:20.509536  269883 out.go:239] * 
	W0701 23:19:20.510312  269883 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 23:19:20.511951  269883 out.go:177] 
	I0701 23:19:18.363478  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:20.863309  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:23.362826  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:25.363077  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:27.863010  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:29.863599  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:31.863690  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:34.363405  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:36.862844  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:39.363009  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:41.863136  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:43.863182  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:46.362920  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:48.363519  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:50.365995  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:52.863524  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:55.363287  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:57.363494  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:59.863734  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:02.362902  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:04.363417  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:06.863299  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:08.863390  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:11.363598  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:13.863329  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:16.363213  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:18.363246  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:20.862846  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:22.863412  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:25.363572  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:27.863611  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:29.863926  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:32.363408  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:34.363894  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:36.863454  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:39.363389  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:41.363918  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:43.364119  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:45.863734  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:48.363224  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:50.862933  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:52.863303  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:54.863540  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:57.363333  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:59.363619  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:01.863747  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:04.363462  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:06.863642  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:09.363229  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:11.863382  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:14.363453  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:16.363483  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:18.863559  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:20.863852  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:23.363579  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:25.863700  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:27.863820  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:30.363502  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:32.365183  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:34.862977  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:36.863647  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:39.363489  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:41.862636  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:43.863818  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:46.362854  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:48.363608  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:50.863761  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:53.363511  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:55.363792  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:57.863460  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:00.363227  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:02.863069  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:04.863654  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:06.863767  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:09.362775  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:11.363266  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:13.363390  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:15.863386  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:18.363719  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:20.363796  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:22.863167  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:24.863249  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:27.362843  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:27.865277  275844 node_ready.go:38] duration metric: took 4m0.008613758s waiting for node "default-k8s-different-port-20220701230032-10066" to be "Ready" ...
	I0701 23:22:27.867660  275844 out.go:177] 
	W0701 23:22:27.869191  275844 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0701 23:22:27.869208  275844 out.go:239] * 
	W0701 23:22:27.869949  275844 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 23:22:27.871815  275844 out.go:177] 
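Both profiles fail identically: node_ready.go polls the node's Ready condition until its timeout expires and the condition never flips, because (per the describe-nodes output below) the kubelet reports NetworkReady=false with "cni plugin not initialized" and the node keeps its node.kubernetes.io/not-ready taint. A client-go sketch of that kind of readiness poll (illustrative, not minikube's code); note that wait.ErrWaitTimeout carries exactly the "timed out waiting for the condition" message surfaced above:

    package readysketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node object until its Ready condition is
    // True, returning wait.ErrWaitTimeout on expiry.
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat API blips as retryable
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }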
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	2fc852ec2cff3       6fb66cd78abfe       About a minute ago   Running             kindnet-cni               1                   0870012c7fdbd
	3ad6a11b29506       6fb66cd78abfe       4 minutes ago        Exited              kindnet-cni               0                   0870012c7fdbd
	eb11e24e69335       a634548d10b03       4 minutes ago        Running             kube-proxy                0                   8d6d8a26d2a9a
	4853e6fab716f       5d725196c1f47       4 minutes ago        Running             kube-scheduler            2                   c9495842f595f
	30f4a41daa330       aebe758cef4cd       4 minutes ago        Running             etcd                      2                   9416e3f200057
	63cbe08c42192       34cdf99b1bb3b       4 minutes ago        Running             kube-controller-manager   2                   f23084fa93a83
	dfbb7ffbbb3d0       d3377ffb7177c       4 minutes ago        Running             kube-apiserver            2                   7db62a183733b
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-07-01 23:13:37 UTC, end at Fri 2022-07-01 23:22:28 UTC. --
	Jul 01 23:18:27 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:18:27.804985238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 23:18:27 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:18:27.805001457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 23:18:27 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:18:27.805372153Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d6d8a26d2a9aeb37ace325568535ea8b75015a0b89d4c9ff3e3fb1547b6323c pid=3463 runtime=io.containerd.runc.v2
	Jul 01 23:18:27 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:18:27.867558301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29k5c,Uid:4b4c82af-3672-4251-b6f5-92394f51d90f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d6d8a26d2a9aeb37ace325568535ea8b75015a0b89d4c9ff3e3fb1547b6323c\""
	Jul 01 23:18:27 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:18:27.871682215Z" level=info msg="CreateContainer within sandbox \"8d6d8a26d2a9aeb37ace325568535ea8b75015a0b89d4c9ff3e3fb1547b6323c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Jul 01 23:18:27 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:18:27.938247641Z" level=info msg="CreateContainer within sandbox \"8d6d8a26d2a9aeb37ace325568535ea8b75015a0b89d4c9ff3e3fb1547b6323c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eb11e24e693352d61f216279f67ca374a059eabd7c14e7969d0c8e9b21761c31\""
	Jul 01 23:18:27 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:18:27.939090768Z" level=info msg="StartContainer for \"eb11e24e693352d61f216279f67ca374a059eabd7c14e7969d0c8e9b21761c31\""
	Jul 01 23:18:28 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:18:28.218626390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-g8hks,Uid:14b30c49-6a5a-4bb2-8b30-0731e8fc2a23,Namespace:kube-system,Attempt:0,} returns sandbox id \"0870012c7fdbda9ad037dbb07661db0eaa03cbb4c173aace835d8dde5f4bf397\""
	Jul 01 23:18:28 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:18:28.222729250Z" level=info msg="CreateContainer within sandbox \"0870012c7fdbda9ad037dbb07661db0eaa03cbb4c173aace835d8dde5f4bf397\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Jul 01 23:18:28 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:18:28.238499212Z" level=info msg="StartContainer for \"eb11e24e693352d61f216279f67ca374a059eabd7c14e7969d0c8e9b21761c31\" returns successfully"
	Jul 01 23:18:28 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:18:28.324494696Z" level=info msg="CreateContainer within sandbox \"0870012c7fdbda9ad037dbb07661db0eaa03cbb4c173aace835d8dde5f4bf397\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"3ad6a11b29506a0cba58bb522457def9f974a3db06349f420ab56bfe697fe78c\""
	Jul 01 23:18:28 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:18:28.326201707Z" level=info msg="StartContainer for \"3ad6a11b29506a0cba58bb522457def9f974a3db06349f420ab56bfe697fe78c\""
	Jul 01 23:18:28 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:18:28.723160093Z" level=info msg="StartContainer for \"3ad6a11b29506a0cba58bb522457def9f974a3db06349f420ab56bfe697fe78c\" returns successfully"
	Jul 01 23:19:14 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:19:14.625524413Z" level=error msg="ContainerStatus for \"8cad18100ddfb605ad7b1dc3defb42d54fced756304b26193a83e0c51a151f0f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8cad18100ddfb605ad7b1dc3defb42d54fced756304b26193a83e0c51a151f0f\": not found"
	Jul 01 23:19:14 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:19:14.626107242Z" level=error msg="ContainerStatus for \"d03b8ec1f2fc4764df6494a43a3fd2a11ed527553bd6429e1b3b0decb037a151\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d03b8ec1f2fc4764df6494a43a3fd2a11ed527553bd6429e1b3b0decb037a151\": not found"
	Jul 01 23:19:14 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:19:14.626665968Z" level=error msg="ContainerStatus for \"a664ded395dc6bb92b46a0e06c5d305baa50749f41ee031cfeb319016251d888\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a664ded395dc6bb92b46a0e06c5d305baa50749f41ee031cfeb319016251d888\": not found"
	Jul 01 23:19:14 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:19:14.627259887Z" level=error msg="ContainerStatus for \"b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7\": not found"
	Jul 01 23:21:09 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:21:09.166740504Z" level=info msg="shim disconnected" id=3ad6a11b29506a0cba58bb522457def9f974a3db06349f420ab56bfe697fe78c
	Jul 01 23:21:09 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:21:09.166804795Z" level=warning msg="cleaning up after shim disconnected" id=3ad6a11b29506a0cba58bb522457def9f974a3db06349f420ab56bfe697fe78c namespace=k8s.io
	Jul 01 23:21:09 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:21:09.166826429Z" level=info msg="cleaning up dead shim"
	Jul 01 23:21:09 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:21:09.176570457Z" level=warning msg="cleanup warnings time=\"2022-07-01T23:21:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3938 runtime=io.containerd.runc.v2\n"
	Jul 01 23:21:09 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:21:09.210213097Z" level=info msg="CreateContainer within sandbox \"0870012c7fdbda9ad037dbb07661db0eaa03cbb4c173aace835d8dde5f4bf397\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Jul 01 23:21:09 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:21:09.222785855Z" level=info msg="CreateContainer within sandbox \"0870012c7fdbda9ad037dbb07661db0eaa03cbb4c173aace835d8dde5f4bf397\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"2fc852ec2cff3fd96bd143fafa811951fd218e7ab804c77f601ebd1ef3d80cb4\""
	Jul 01 23:21:09 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:21:09.223430505Z" level=info msg="StartContainer for \"2fc852ec2cff3fd96bd143fafa811951fd218e7ab804c77f601ebd1ef3d80cb4\""
	Jul 01 23:21:09 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:21:09.335037787Z" level=info msg="StartContainer for \"2fc852ec2cff3fd96bd143fafa811951fd218e7ab804c77f601ebd1ef3d80cb4\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220701230032-10066
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220701230032-10066
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04
	                    minikube.k8s.io/name=default-k8s-different-port-20220701230032-10066
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_01T23_18_15_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 01 Jul 2022 23:18:11 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220701230032-10066
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 01 Jul 2022 23:22:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 01 Jul 2022 23:18:24 +0000   Fri, 01 Jul 2022 23:18:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 01 Jul 2022 23:18:24 +0000   Fri, 01 Jul 2022 23:18:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 01 Jul 2022 23:18:24 +0000   Fri, 01 Jul 2022 23:18:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 01 Jul 2022 23:18:24 +0000   Fri, 01 Jul 2022 23:18:09 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-different-port-20220701230032-10066
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873484Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873484Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                674fca36-2ebb-426c-b65b-bd78bdb510f5
	  Boot ID:                    a4927dcd-d031-4927-a8c8-2ea0f9a10287
	  Kernel Version:             5.15.0-1012-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.6
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220701230032-10066                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m14s
	  kube-system                 kindnet-g8hks                                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220701230032-10066             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220701230032-10066    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-proxy-29k5c                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220701230032-10066             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m     kube-proxy       
	  Normal  Starting                 4m15s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m15s  kubelet          Node default-k8s-different-port-20220701230032-10066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m15s  kubelet          Node default-k8s-different-port-20220701230032-10066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m15s  kubelet          Node default-k8s-different-port-20220701230032-10066 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s   node-controller  Node default-k8s-different-port-20220701230032-10066 event: Registered Node default-k8s-different-port-20220701230032-10066 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +1.002277] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +2.015803] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000004] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +4.255546] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000007] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000001] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000005] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000011] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +8.195166] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000007] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000003] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000003] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	
	* 
	* ==> etcd [30f4a41daa330161680cb83349451cab0de63a9e9ca0a9556f6b8d8b46ab9366] <==
	* {"level":"info","ts":"2022-07-01T23:18:08.446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-07-01T23:18:08.446Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-07-01T23:18:08.447Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-01T23:18:08.448Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-01T23:18:08.448Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-01T23:18:08.448Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-01T23:18:08.448Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-different-port-20220701230032-10066 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-01T23:18:08.838Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-01T23:18:08.838Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-01T23:18:08.838Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:18:08.838Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:18:08.838Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:18:08.839Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-01T23:18:08.839Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  23:22:29 up  1:05,  0 users,  load average: 0.17, 0.37, 0.98
	Linux default-k8s-different-port-20220701230032-10066 5.15.0-1012-gcp #17~20.04.1-Ubuntu SMP Thu Jun 23 16:10:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [dfbb7ffbbb3d01ec5c655c33be14c60a9a6f2957fe3162cd96a537536351e36c] <==
	* I0701 23:18:28.541615       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0701 23:18:29.241634       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.99.11.55]
	I0701 23:18:29.522675       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.98.94.101]
	I0701 23:18:29.537184       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.97.58.108]
	W0701 23:18:30.141647       1 handler_proxy.go:102] no RequestInfo found in the context
	W0701 23:18:30.141706       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:18:30.141723       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0701 23:18:30.141738       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0701 23:18:30.141748       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0701 23:18:30.142884       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 23:19:30.142408       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:19:30.142475       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0701 23:19:30.142486       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 23:19:30.143531       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:19:30.143567       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0701 23:19:30.143574       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 23:21:30.142851       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:21:30.142927       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0701 23:21:30.142939       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 23:21:30.144000       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:21:30.144036       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0701 23:21:30.144044       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [63cbe08c42192f8a68dc4ad6bf2d9244cfb450753c00dd820632e96f48873cdf] <==
	* E0701 23:18:29.430299       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0701 23:18:29.431453       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0701 23:18:29.431517       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0701 23:18:29.433335       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0701 23:18:29.433342       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0701 23:18:29.438148       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0701 23:18:29.438152       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0701 23:18:29.458004       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-lnpcv"
	I0701 23:18:29.517778       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-7klw9"
	E0701 23:18:56.748669       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:18:57.167795       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:19:26.763367       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:19:27.182118       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:19:56.776885       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:19:57.196151       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:20:26.796106       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:20:27.212559       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:20:56.812469       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:20:57.229885       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:21:26.827751       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:21:27.243771       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:21:56.843080       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:21:57.258066       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:22:26.857186       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:22:27.272410       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [eb11e24e693352d61f216279f67ca374a059eabd7c14e7969d0c8e9b21761c31] <==
	* I0701 23:18:28.434637       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0701 23:18:28.434719       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0701 23:18:28.434760       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0701 23:18:28.538003       1 server_others.go:206] "Using iptables Proxier"
	I0701 23:18:28.538045       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0701 23:18:28.538060       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0701 23:18:28.538084       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0701 23:18:28.538120       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 23:18:28.538295       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 23:18:28.538531       1 server.go:661] "Version info" version="v1.24.2"
	I0701 23:18:28.538597       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 23:18:28.539177       1 config.go:317] "Starting service config controller"
	I0701 23:18:28.539211       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0701 23:18:28.539221       1 config.go:226] "Starting endpoint slice config controller"
	I0701 23:18:28.539229       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0701 23:18:28.539329       1 config.go:444] "Starting node config controller"
	I0701 23:18:28.539355       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0701 23:18:28.639445       1 shared_informer.go:262] Caches are synced for node config
	I0701 23:18:28.639448       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0701 23:18:28.639497       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [4853e6fab716f8af39f331aabb0fc7d89198fa1cc48add3023586165da7b294e] <==
	* E0701 23:18:11.629897       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 23:18:11.629954       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0701 23:18:11.629929       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0701 23:18:11.629977       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0701 23:18:11.630053       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0701 23:18:11.630075       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0701 23:18:11.630268       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0701 23:18:11.630288       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0701 23:18:11.630350       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 23:18:11.630391       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0701 23:18:12.451805       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0701 23:18:12.451837       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0701 23:18:12.464971       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 23:18:12.465005       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0701 23:18:12.531235       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0701 23:18:12.531262       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0701 23:18:12.549388       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 23:18:12.549421       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0701 23:18:12.617871       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0701 23:18:12.617906       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0701 23:18:12.623982       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0701 23:18:12.624020       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0701 23:18:12.770200       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0701 23:18:12.770239       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0701 23:18:14.927748       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-07-01 23:13:37 UTC, end at Fri 2022-07-01 23:22:29 UTC. --
	Jul 01 23:20:29 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:20:29.976430    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:20:34 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:20:34.977363    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:20:39 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:20:39.979000    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:20:44 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:20:44.979802    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:20:49 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:20:49.980643    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:20:54 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:20:54.981972    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:20:59 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:20:59.983392    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:21:04 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:21:04.985091    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:21:09 default-k8s-different-port-20220701230032-10066 kubelet[3059]: I0701 23:21:09.207852    3059 scope.go:110] "RemoveContainer" containerID="3ad6a11b29506a0cba58bb522457def9f974a3db06349f420ab56bfe697fe78c"
	Jul 01 23:21:09 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:21:09.986800    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:21:14 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:21:14.988096    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:21:19 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:21:19.988860    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:21:24 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:21:24.989993    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:21:29 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:21:29.990912    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:21:34 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:21:34.992253    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:21:39 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:21:39.993160    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:21:44 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:21:44.994344    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:21:49 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:21:49.996018    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:21:54 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:21:54.997207    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:21:59 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:21:59.998985    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:22:05 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:22:05.000475    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:22:10 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:22:10.002084    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:22:15 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:22:15.002710    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:22:20 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:22:20.003334    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:22:25 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:22:25.004208    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220701230032-10066 -n default-k8s-different-port-20220701230032-10066
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220701230032-10066 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-6d4b75cb6d-wfcgh metrics-server-5c6f97fb75-k9568 storage-provisioner dashboard-metrics-scraper-dffd48c4c-lnpcv kubernetes-dashboard-5fd5574d9f-7klw9
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220701230032-10066 describe pod coredns-6d4b75cb6d-wfcgh metrics-server-5c6f97fb75-k9568 storage-provisioner dashboard-metrics-scraper-dffd48c4c-lnpcv kubernetes-dashboard-5fd5574d9f-7klw9
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220701230032-10066 describe pod coredns-6d4b75cb6d-wfcgh metrics-server-5c6f97fb75-k9568 storage-provisioner dashboard-metrics-scraper-dffd48c4c-lnpcv kubernetes-dashboard-5fd5574d9f-7klw9: exit status 1 (54.288885ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-6d4b75cb6d-wfcgh" not found
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-k9568" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-dffd48c4c-lnpcv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-5fd5574d9f-7klw9" not found

** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220701230032-10066 describe pod coredns-6d4b75cb6d-wfcgh metrics-server-5c6f97fb75-k9568 storage-provisioner dashboard-metrics-scraper-dffd48c4c-lnpcv kubernetes-dashboard-5fd5574d9f-7klw9: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/SecondStart (533.42s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.31s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-7vh58" [5fca74b2-ed5f-40fd-8ad1-a8574aac961d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
E0701 23:19:34.966318   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
E0701 23:20:13.509480   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
E0701 23:20:17.548373   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
E0701 23:20:22.601742   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/old-k8s-version-20220701225700-10066/client.crt: no such file or directory
E0701 23:20:34.503333   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
E0701 23:20:43.467494   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
E0701 23:20:51.855424   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
E0701 23:21:42.423004   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
E0701 23:21:47.696985   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:22:00.872863   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0701 23:27:32.035087   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0701 23:27:38.757132   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/old-k8s-version-20220701225700-10066/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0701 23:28:11.918291   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220701225718-10066 -n no-preload-20220701225718-10066
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2022-07-01 23:28:22.874978693 +0000 UTC m=+3892.179916217
start_stop_delete_test.go:274: (dbg) Run:  kubectl --context no-preload-20220701225718-10066 describe po kubernetes-dashboard-5fd5574d9f-7vh58 -n kubernetes-dashboard
start_stop_delete_test.go:274: (dbg) Non-zero exit: kubectl --context no-preload-20220701225718-10066 describe po kubernetes-dashboard-5fd5574d9f-7vh58 -n kubernetes-dashboard: context deadline exceeded (1.077µs)
start_stop_delete_test.go:274: kubectl --context no-preload-20220701225718-10066 describe po kubernetes-dashboard-5fd5574d9f-7vh58 -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:274: (dbg) Run:  kubectl --context no-preload-20220701225718-10066 logs kubernetes-dashboard-5fd5574d9f-7vh58 -n kubernetes-dashboard
start_stop_delete_test.go:274: (dbg) Non-zero exit: kubectl --context no-preload-20220701225718-10066 logs kubernetes-dashboard-5fd5574d9f-7vh58 -n kubernetes-dashboard: context deadline exceeded (137ns)
start_stop_delete_test.go:274: kubectl --context no-preload-20220701225718-10066 logs kubernetes-dashboard-5fd5574d9f-7vh58 -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220701225718-10066
helpers_test.go:235: (dbg) docker inspect no-preload-20220701225718-10066:

-- stdout --
	[
	    {
	        "Id": "6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff",
	        "Created": "2022-07-01T22:57:20.298940328Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 270210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-01T23:10:29.171406497Z",
	            "FinishedAt": "2022-07-01T23:10:27.869046021Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff/hostname",
	        "HostsPath": "/var/lib/docker/containers/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff/hosts",
	        "LogPath": "/var/lib/docker/containers/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff/6714999bf30316177bd7199e9c9cf9f418c80722881a96cb9b26d1061e0f0eff-json.log",
	        "Name": "/no-preload-20220701225718-10066",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220701225718-10066:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220701225718-10066",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f8e3e2d6947308dcda48dbc22fb554a071cfe01234e052dce72729a0c26066f6-init/diff:/var/lib/docker/overlay2/edb6d17bdc5cabdb1fac75f829fa6b696b451285a56daf994d10472ac162d1fa/diff:/var/lib/docker/overlay2/12553db54f9401cb3cd4ea304c82950d708bec4ffbeafa05f361df03389b23f5/diff:/var/lib/docker/overlay2/7e3fcd2813dd5acb24030b1419b7a8a53ea2329287a62c3cd166eb5e47617a4e/diff:/var/lib/docker/overlay2/15f31da888d327ea20972c47badf360a35e5d2b17a3fae8f69cb0d0d1db2bfa0/diff:/var/lib/docker/overlay2/4d7fcdaa5483b16cc999e79056aae74f5814a7aca964696154ac6ec31eebe508/diff:/var/lib/docker/overlay2/9f3a7dbc3c70fba7c77f5d8e54e394ac0c37123defe036d76af7d048d08b974e/diff:/var/lib/docker/overlay2/4f0b0b0343fd3ac1a6d5f774e91228b75f1e33f810714d09d38ecd1e38cfe250/diff:/var/lib/docker/overlay2/bfd684b2b6e2cfe1f30901dbf02332074c65e296b386b083ea38c10128b42115/diff:/var/lib/docker/overlay2/d05e53b39ab1962ee7dd1a17c7c6df7e7db940db5a19b0d9312ccafaf3c67afc/diff:/var/lib/docker/overlay2/7a24a3
8f96e1aff9985bf714d962a37888eced9c4b8a3018e499080634306871/diff:/var/lib/docker/overlay2/b54ee85e20ecd71a593d453d879a26c40869049e30ba5b18b4c021c4cf667ec6/diff:/var/lib/docker/overlay2/cc9f249b45cce30a5a0432a644435d68e765dc097e3bfdbde41b7d725f258866/diff:/var/lib/docker/overlay2/dac381b24f77e5de76295d05ca2d4ae6a611d31efdf28050dc8acbc16f893037/diff:/var/lib/docker/overlay2/fe9cd2517ce90b2330101b987c76605b9d73f40b8c967e33975ad22bd3e427df/diff:/var/lib/docker/overlay2/34da7eb6e789c53ba8f86e0d904243d3314979a523044238ab8b7e9e95f685d4/diff:/var/lib/docker/overlay2/44c3ffe75f70e4d9d5d17064609a4a167961cdca81aab03b2a72dfbe05d09f41/diff:/var/lib/docker/overlay2/76feeb878d0278a48230287307ce3a01000291c1284d2222d114e37836ebc153/diff:/var/lib/docker/overlay2/3ab5eb284edb3402610bb5d4fbd0d8c3fc6b53fd39d2a66cc3a9bb1e313fe5ee/diff:/var/lib/docker/overlay2/7604bb52dbaf4e2c359f0b2b11aa345bbf33498b0346044f4c6ff3178a3e8df5/diff:/var/lib/docker/overlay2/0a19abc50663f9812511b849524a7625a16f183fa64aff5a0a9c1c7664cc2027/diff:/var/lib/d
ocker/overlay2/00d25c59a62da71857955ae29c7c803b77cb4c97b23913f166c62440a3b63456/diff:/var/lib/docker/overlay2/cd6c06e5c4d7bdfb5a78ffcf6c0cf331a6101b713bd238ad5c0ab119ad599bf4/diff:/var/lib/docker/overlay2/73ae644d477b17687dc53b7176c9e05aa10e0e0cc22b0d2adbd96493280d2696/diff:/var/lib/docker/overlay2/1490b9c59b00b3ee6078671a8023960891ee2bfc21bfe92c12a71c87ea04fff1/diff:/var/lib/docker/overlay2/42f40e76b67a2f74340f543b0beab44f98b10e0a0903d4006a173272c52a31d0/diff:/var/lib/docker/overlay2/648763e18e8325f20ff26dd88a1788a2d9e575a9ca24f44b2430918381add179/diff:/var/lib/docker/overlay2/955b13af9b6a054abdcd0dfdf71ce57d6da1b5718bee40c52fc7739b4ff16496/diff:/var/lib/docker/overlay2/1e55f6bfa306e6667808548601e508c0335b7021c9c6791ee5ed3df2514aff12/diff:/var/lib/docker/overlay2/ed1fd4b4489438557a54639b2ed769eb0371193cd0129b222edea8f79274f7e8/diff:/var/lib/docker/overlay2/c4058f514a3e669de2d6139c7bc94d4187679acabf288e44e25b5d50db5b14ac/diff:/var/lib/docker/overlay2/f55c5f04c4a64f585a9b90e7f23e5d5142014f06b3bf877739879029b76
b574f/diff:/var/lib/docker/overlay2/74b9141dda92027ee02e8b3abc4bb05ed13ce01d58f4c3d644e7235e411a863e/diff:/var/lib/docker/overlay2/53dd59b4b1dfa382f4b296dffd27b84e247851098ea1c719fa3b8ed3e158e73d/diff:/var/lib/docker/overlay2/01e3082bf93042f963fa4d0b92796bd5a1f3663e87f24a76a6fb5f3dabdb3d2f/diff:/var/lib/docker/overlay2/6b96d2f01ba7bf16240d81397be753c74fa1eb78889ce36d1a88224496f5ee64/diff:/var/lib/docker/overlay2/1c326eb1859e3f119094b72e595fe89b3be1895193b0f9124ec3aa7be78fa0ff/diff:/var/lib/docker/overlay2/9c31f48fe49160b4b3d824b787329ea1d50133fd86b0c7f2054bebed2bd136b4/diff:/var/lib/docker/overlay2/6b78f7313fcedc631cc839c84f80e065670f47ccc8e548578527bec6b0f2cce3/diff:/var/lib/docker/overlay2/6bf15a167fccdef650b6e6b0daa521189e2c744a9a41640bc0e963421f139437/diff:/var/lib/docker/overlay2/b01230a9beae37bded75410c1a3849626080918fdc36535a1f4092f9227a7ccf/diff:/var/lib/docker/overlay2/793ce1632cff927c8e2528b23a62c94ed141cb52d5df04225085072c61ca5bb7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f8e3e2d6947308dcda48dbc22fb554a071cfe01234e052dce72729a0c26066f6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f8e3e2d6947308dcda48dbc22fb554a071cfe01234e052dce72729a0c26066f6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f8e3e2d6947308dcda48dbc22fb554a071cfe01234e052dce72729a0c26066f6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220701225718-10066",
	                "Source": "/var/lib/docker/volumes/no-preload-20220701225718-10066/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220701225718-10066",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220701225718-10066",
	                "name.minikube.sigs.k8s.io": "no-preload-20220701225718-10066",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4ace33098aa0f86a5e7c360e6ec28bc842985cefecf875d3cd83a6f829c7d2d7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49437"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49433"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49434"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4ace33098aa0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220701225718-10066": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6714999bf303",
	                        "no-preload-20220701225718-10066"
	                    ],
	                    "NetworkID": "1edec7b6219d6237636ff26267a26187f0ef2e748e4635b07760f0d37cc8596c",
	                    "EndpointID": "115f09d6b4a01169f14b8656811420109ca1c74fd1bdac734e6008c69c7cb092",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
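
Note: the JSON above is raw "docker container inspect" output for the profile's node container. The field that matters for reaching the node is NetworkSettings.Ports, which publishes container ports 22, 2376, 5000, 8443 and 32443 on ephemeral 127.0.0.1 host ports. A minimal standalone sketch (not minikube's own code) of pulling the forwarded SSH port out of that JSON in Go, equivalent to the inspect -f Go template that appears later in this log:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // container mirrors only the inspect fields needed here.
    type container struct {
        NetworkSettings struct {
            Ports map[string][]struct {
                HostIp   string
                HostPort string
            }
        }
    }

    func main() {
        // Container name taken from the dump above; substitute any profile.
        out, err := exec.Command("docker", "container", "inspect",
            "no-preload-20220701225718-10066").Output()
        if err != nil {
            log.Fatal(err)
        }
        var cs []container // inspect always returns a JSON array
        if err := json.Unmarshal(out, &cs); err != nil {
            log.Fatal(err)
        }
        // 22/tcp is bound to a random 127.0.0.1 host port (49437 in the dump above).
        fmt.Println(cs[0].NetworkSettings.Ports["22/tcp"][0].HostPort)
    }
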
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220701225718-10066 -n no-preload-20220701225718-10066
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20220701225718-10066 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	| start   | -p newest-cni-20220701230537-10066 --memory=2200           | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:06 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p newest-cni-20220701230537-10066 --memory=2200           | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:07 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC | 01 Jul 22 23:10 UTC |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC | 01 Jul 22 23:10 UTC |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC | 01 Jul 22 23:10 UTC |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC |                     |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |          |         |         |                     |                     |
	|         | --wait=true --preload=false                                |          |         |         |                     |                     |
	|         | --driver=docker                                            |          |         |         |                     |                     |
	|         | --container-runtime=containerd                             |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | default-k8s-different-port-20220701230032-10066            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | default-k8s-different-port-20220701230032-10066            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | default-k8s-different-port-20220701230032-10066            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC |                     |
	|         | default-k8s-different-port-20220701230032-10066            |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/01 23:13:36
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 23:13:36.508585  275844 out.go:296] Setting OutFile to fd 1 ...
	I0701 23:13:36.508812  275844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:13:36.508825  275844 out.go:309] Setting ErrFile to fd 2...
	I0701 23:13:36.508833  275844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:13:36.509394  275844 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 23:13:36.509707  275844 out.go:303] Setting JSON to false
	I0701 23:13:36.511123  275844 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3370,"bootTime":1656713847,"procs":332,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 23:13:36.511210  275844 start.go:125] virtualization: kvm guest
	I0701 23:13:36.513852  275844 out.go:177] * [default-k8s-different-port-20220701230032-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0701 23:13:36.516346  275844 out.go:177]   - MINIKUBE_LOCATION=14483
	I0701 23:13:36.516221  275844 notify.go:193] Checking for updates...
	I0701 23:13:36.517990  275844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 23:13:36.519337  275844 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:13:36.520961  275844 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 23:13:36.522517  275844 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 23:13:36.524336  275844 config.go:178] Loaded profile config "default-k8s-different-port-20220701230032-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:13:36.524783  275844 driver.go:360] Setting default libvirt URI to qemu:///system
	I0701 23:13:36.571678  275844 docker.go:137] docker version: linux-20.10.17
	I0701 23:13:36.571797  275844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:13:36.688003  275844 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-07-01 23:13:36.603240517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:13:36.688097  275844 docker.go:254] overlay module found
	I0701 23:13:36.689718  275844 out.go:177] * Using the docker driver based on existing profile
	I0701 23:13:36.691073  275844 start.go:284] selected driver: docker
	I0701 23:13:36.691091  275844 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:13:36.691176  275844 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 23:13:36.711421  275844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:13:36.815393  275844 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-07-01 23:13:36.741940503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:13:36.815669  275844 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 23:13:36.815700  275844 cni.go:95] Creating CNI manager for ""
	I0701 23:13:36.815708  275844 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:13:36.815734  275844 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:13:36.817973  275844 out.go:177] * Starting control plane node default-k8s-different-port-20220701230032-10066 in cluster default-k8s-different-port-20220701230032-10066
	I0701 23:13:36.819338  275844 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0701 23:13:36.820691  275844 out.go:177] * Pulling base image ...
	I0701 23:13:36.821863  275844 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:13:36.821911  275844 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4
	I0701 23:13:36.821925  275844 cache.go:57] Caching tarball of preloaded images
	I0701 23:13:36.821988  275844 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0701 23:13:36.822107  275844 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 23:13:36.822124  275844 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on containerd
	I0701 23:13:36.822229  275844 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/config.json ...
	I0701 23:13:36.857028  275844 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0701 23:13:36.857061  275844 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0701 23:13:36.857085  275844 cache.go:208] Successfully downloaded all kic artifacts
	I0701 23:13:36.857128  275844 start.go:352] acquiring machines lock for default-k8s-different-port-20220701230032-10066: {Name:mk7518221e8259d073969ba977a5dbef99fe5935 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:13:36.857241  275844 start.go:356] acquired machines lock for "default-k8s-different-port-20220701230032-10066" in 79.413µs
	I0701 23:13:36.857265  275844 start.go:94] Skipping create...Using existing machine configuration
	I0701 23:13:36.857273  275844 fix.go:55] fixHost starting: 
	I0701 23:13:36.857565  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:13:36.889959  275844 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220701230032-10066: state=Stopped err=<nil>
	W0701 23:13:36.890003  275844 fix.go:129] unexpected machine state, will restart: <nil>
	I0701 23:13:36.892196  275844 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220701230032-10066" ...
	I0701 23:13:34.335098  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:36.335670  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:36.893583  275844 cli_runner.go:164] Run: docker start default-k8s-different-port-20220701230032-10066
	I0701 23:13:37.260876  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:13:37.298699  275844 kic.go:416] container "default-k8s-different-port-20220701230032-10066" state is running.
	I0701 23:13:37.299071  275844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:13:37.333911  275844 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/config.json ...
	I0701 23:13:37.334149  275844 machine.go:88] provisioning docker machine ...
	I0701 23:13:37.334173  275844 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220701230032-10066"
	I0701 23:13:37.334223  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:37.368604  275844 main.go:134] libmachine: Using SSH client type: native
	I0701 23:13:37.368836  275844 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0701 23:13:37.368867  275844 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220701230032-10066 && echo "default-k8s-different-port-20220701230032-10066" | sudo tee /etc/hostname
	I0701 23:13:37.369499  275844 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35278->127.0.0.1:49442: read: connection reset by peer
	I0701 23:13:40.494516  275844 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220701230032-10066
	
	I0701 23:13:40.494611  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:40.527972  275844 main.go:134] libmachine: Using SSH client type: native
	I0701 23:13:40.528160  275844 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0701 23:13:40.528184  275844 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220701230032-10066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220701230032-10066/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220701230032-10066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 23:13:40.641942  275844 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0701 23:13:40.641973  275844 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
	I0701 23:13:40.642000  275844 ubuntu.go:177] setting up certificates
	I0701 23:13:40.642011  275844 provision.go:83] configureAuth start
	I0701 23:13:40.642064  275844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:13:40.675855  275844 provision.go:138] copyHostCerts
	I0701 23:13:40.675913  275844 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
	I0701 23:13:40.675927  275844 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
	I0701 23:13:40.675991  275844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
	I0701 23:13:40.676060  275844 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
	I0701 23:13:40.676071  275844 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
	I0701 23:13:40.676098  275844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
	I0701 23:13:40.676148  275844 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
	I0701 23:13:40.676158  275844 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
	I0701 23:13:40.676192  275844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1679 bytes)
	I0701 23:13:40.676235  275844 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220701230032-10066 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220701230032-10066]
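
Note: the provision step above issues a server certificate against minikube's local CA, with the SANs listed in the san=[...] field (node IP 192.168.76.2, 127.0.0.1, localhost, minikube, and the profile name). A rough sketch of the same idea using Go's crypto/x509, self-signed here for brevity where minikube actually signs with its ca-key.pem:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the san=[...] list in the provision log line.
            DNSNames:    []string{"localhost", "minikube", "default-k8s-different-port-20220701230032-10066"},
            IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
        }
        // Self-signed (template doubles as parent); minikube passes its CA here.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
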
	I0701 23:13:40.954393  275844 provision.go:172] copyRemoteCerts
	I0701 23:13:40.954451  275844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 23:13:40.954482  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:40.989611  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.073447  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 23:13:41.090826  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0701 23:13:41.107547  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 23:13:41.124219  275844 provision.go:86] duration metric: configureAuth took 482.194415ms
	I0701 23:13:41.124245  275844 ubuntu.go:193] setting minikube options for container-runtime
	I0701 23:13:41.124417  275844 config.go:178] Loaded profile config "default-k8s-different-port-20220701230032-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:13:41.124431  275844 machine.go:91] provisioned docker machine in 3.790266635s
	I0701 23:13:41.124441  275844 start.go:306] post-start starting for "default-k8s-different-port-20220701230032-10066" (driver="docker")
	I0701 23:13:41.124452  275844 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 23:13:41.124510  275844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 23:13:41.124554  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.158325  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.245657  275844 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 23:13:41.248516  275844 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 23:13:41.248538  275844 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 23:13:41.248546  275844 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 23:13:41.248551  275844 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0701 23:13:41.248559  275844 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
	I0701 23:13:41.248598  275844 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
	I0701 23:13:41.248664  275844 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem -> 100662.pem in /etc/ssl/certs
	I0701 23:13:41.248742  275844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 23:13:41.255535  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:13:41.272444  275844 start.go:309] post-start completed in 147.990653ms
	I0701 23:13:41.272501  275844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 23:13:41.272534  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.306973  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.391227  275844 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 23:13:41.395145  275844 fix.go:57] fixHost completed within 4.53786816s
	I0701 23:13:41.395167  275844 start.go:81] releasing machines lock for "default-k8s-different-port-20220701230032-10066", held for 4.537914302s
	I0701 23:13:41.395240  275844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.428938  275844 ssh_runner.go:195] Run: systemctl --version
	I0701 23:13:41.428983  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.428986  275844 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0701 23:13:41.429036  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.463442  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.464061  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:38.835336  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:41.334767  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:43.334801  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:41.546236  275844 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 23:13:41.557434  275844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 23:13:41.566944  275844 docker.go:179] disabling docker service ...
	I0701 23:13:41.566994  275844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 23:13:41.575898  275844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 23:13:41.584165  275844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 23:13:41.651388  275844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 23:13:41.723308  275844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0701 23:13:41.731887  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 23:13:41.744366  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0701 23:13:41.752324  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0701 23:13:41.760056  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0701 23:13:41.767864  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0701 23:13:41.775399  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0701 23:13:41.782555  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
	I0701 23:13:41.794357  275844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 23:13:41.800246  275844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 23:13:41.806090  275844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 23:13:41.881056  275844 ssh_runner.go:195] Run: sudo systemctl restart containerd
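The sed runs above amount to a line-oriented rewrite of /etc/containerd/config.toml before the restart. A minimal Go sketch of that rewrite, assuming the same keys and values the logged commands target (an illustration, not minikube's own helper):

// Sketch: the in-place edits the log's sed commands perform on
// /etc/containerd/config.toml, using only the Go standard library.
package main

import (
	"os"
	"regexp"
)

func rewriteContainerdConfig(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Each pair mirrors one `sed -e 's|^.*key = .*$|key = value|'` above.
	edits := []struct{ re, repl string }{
		{`(?m)^.*sandbox_image = .*$`, `sandbox_image = "k8s.gcr.io/pause:3.7"`},
		{`(?m)^.*restrict_oom_score_adj = .*$`, `restrict_oom_score_adj = false`},
		{`(?m)^.*SystemdCgroup = .*$`, `SystemdCgroup = false`},
		{`(?m)^.*conf_dir = .*$`, `conf_dir = "/etc/cni/net.mk"`},
	}
	for _, e := range edits {
		data = regexp.MustCompile(e.re).ReplaceAll(data, []byte(e.repl))
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	if err := rewriteContainerdConfig("/etc/containerd/config.toml"); err != nil {
		panic(err)
	}
}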
	I0701 23:13:41.950865  275844 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0701 23:13:41.950932  275844 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0701 23:13:41.955104  275844 start.go:471] Will wait 60s for crictl version
	I0701 23:13:41.955155  275844 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:13:41.981690  275844 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-07-01T23:13:41Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0701 23:13:45.834614  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:47.835771  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:53.029041  275844 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:13:53.051421  275844 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0701 23:13:53.051470  275844 ssh_runner.go:195] Run: containerd --version
	I0701 23:13:53.078982  275844 ssh_runner.go:195] Run: containerd --version
	I0701 23:13:53.109597  275844 out.go:177] * Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	I0701 23:13:50.335036  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:52.834973  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:53.110955  275844 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220701230032-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 23:13:53.143106  275844 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0701 23:13:53.146306  275844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:13:53.155228  275844 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:13:53.155287  275844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:13:53.177026  275844 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:13:53.177047  275844 containerd.go:461] Images already preloaded, skipping extraction
	I0701 23:13:53.177094  275844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:13:53.198475  275844 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:13:53.198501  275844 cache_images.go:84] Images are preloaded, skipping loading
	I0701 23:13:53.198643  275844 ssh_runner.go:195] Run: sudo crictl info
	I0701 23:13:53.221518  275844 cni.go:95] Creating CNI manager for ""
	I0701 23:13:53.221540  275844 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:13:53.221552  275844 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0701 23:13:53.221564  275844 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220701230032-10066 NodeName:default-k8s-different-port-20220701230032-10066 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0701 23:13:53.221715  275844 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220701230032-10066"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0701 23:13:53.221814  275844 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220701230032-10066 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0701 23:13:53.221875  275844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0701 23:13:53.228898  275844 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 23:13:53.228952  275844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 23:13:53.235366  275844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0701 23:13:53.247371  275844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 23:13:53.259313  275844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0701 23:13:53.271530  275844 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0701 23:13:53.274142  275844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:13:53.282892  275844 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066 for IP: 192.168.76.2
	I0701 23:13:53.282980  275844 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
	I0701 23:13:53.283015  275844 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
	I0701 23:13:53.283078  275844 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/client.key
	I0701 23:13:53.283124  275844 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key.31bdca25
	I0701 23:13:53.283163  275844 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.key
	I0701 23:13:53.283252  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem (1338 bytes)
	W0701 23:13:53.283280  275844 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066_empty.pem, impossibly tiny 0 bytes
	I0701 23:13:53.283295  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 23:13:53.283320  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
	I0701 23:13:53.283343  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
	I0701 23:13:53.283367  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1679 bytes)
	I0701 23:13:53.283409  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:13:53.283939  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0701 23:13:53.300388  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 23:13:53.317215  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 23:13:53.333335  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0701 23:13:53.349529  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 23:13:53.365494  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0701 23:13:53.381103  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 23:13:53.396977  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 23:13:53.412881  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 23:13:53.429709  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem --> /usr/share/ca-certificates/10066.pem (1338 bytes)
	I0701 23:13:53.446017  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /usr/share/ca-certificates/100662.pem (1708 bytes)
	I0701 23:13:53.461814  275844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 23:13:53.473437  275844 ssh_runner.go:195] Run: openssl version
	I0701 23:13:53.478032  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 23:13:53.484818  275844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:13:53.487660  275844 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul  1 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:13:53.487710  275844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:13:53.492105  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 23:13:53.498584  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10066.pem && ln -fs /usr/share/ca-certificates/10066.pem /etc/ssl/certs/10066.pem"
	I0701 23:13:53.505448  275844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10066.pem
	I0701 23:13:53.508315  275844 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul  1 22:28 /usr/share/ca-certificates/10066.pem
	I0701 23:13:53.508365  275844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10066.pem
	I0701 23:13:53.512833  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10066.pem /etc/ssl/certs/51391683.0"
	I0701 23:13:53.519315  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100662.pem && ln -fs /usr/share/ca-certificates/100662.pem /etc/ssl/certs/100662.pem"
	I0701 23:13:53.526653  275844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100662.pem
	I0701 23:13:53.529618  275844 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul  1 22:28 /usr/share/ca-certificates/100662.pem
	I0701 23:13:53.529700  275844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100662.pem
	I0701 23:13:53.534593  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100662.pem /etc/ssl/certs/3ec20f2e.0"
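Each openssl/ln pair above computes a certificate's subject hash and links it as /etc/ssl/certs/<hash>.0 so OpenSSL's lookup-by-hash can find it (e.g. b5213941.0 for minikubeCA.pem). A sketch of that step, assuming only the openssl CLI shown in the log:

// Sketch: link a CA cert under its `openssl x509 -hash` name.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}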
	I0701 23:13:53.541972  275844 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:13:53.542071  275844 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0701 23:13:53.542137  275844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:13:53.565066  275844 cri.go:87] found id: "e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e"
	I0701 23:13:53.565094  275844 cri.go:87] found id: "b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7"
	I0701 23:13:53.565103  275844 cri.go:87] found id: "50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52"
	I0701 23:13:53.565110  275844 cri.go:87] found id: "f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c"
	I0701 23:13:53.565115  275844 cri.go:87] found id: "a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2"
	I0701 23:13:53.565121  275844 cri.go:87] found id: "042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c"
	I0701 23:13:53.565127  275844 cri.go:87] found id: ""
	I0701 23:13:53.565155  275844 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0701 23:13:53.577099  275844 cri.go:114] JSON = null
	W0701 23:13:53.577140  275844 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
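The mismatch warned about above comes from two different views of the node: `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` lists six container IDs, while `runc --root /run/containerd/runc/k8s.io list -f json` reports no paused containers at all (JSON = null). A sketch of the crictl side of that comparison:

// Sketch: collect kube-system container IDs the way the logged
// `crictl ps -a --quiet --label ...` invocation does (one ID per line).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainers()
	if err != nil {
		panic(err)
	}
	fmt.Println(len(ids), "kube-system containers:", ids)
}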
	I0701 23:13:53.577183  275844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 23:13:53.583727  275844 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0701 23:13:53.583745  275844 kubeadm.go:626] restartCluster start
	I0701 23:13:53.583773  275844 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 23:13:53.589812  275844 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:53.590282  275844 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220701230032-10066" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:13:53.590469  275844 kubeconfig.go:127] "default-k8s-different-port-20220701230032-10066" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig - will repair!
	I0701 23:13:53.590950  275844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:13:53.592051  275844 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 23:13:53.598266  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:53.598304  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:53.605628  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:53.806026  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:53.806089  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:53.814576  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.005749  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.005835  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.013967  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.206355  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.206416  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.215350  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.406581  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.406651  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.415525  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.605755  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.605834  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.614602  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.805813  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.805894  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.814430  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.006748  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.006824  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.015390  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.206606  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.206712  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.215161  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.406468  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.406570  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.415209  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.606590  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.606691  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.615437  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.806738  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.806828  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.815002  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.006349  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.006435  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.014726  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.205912  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.205993  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.214477  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.405750  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.405831  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.414060  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.334779  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:57.835309  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:56.606652  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.606715  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.615356  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.615374  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.615402  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.623156  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.623180  275844 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0701 23:13:56.623187  275844 kubeadm.go:1092] stopping kube-system containers ...
	I0701 23:13:56.623201  275844 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0701 23:13:56.623258  275844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:13:56.649113  275844 cri.go:87] found id: "e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e"
	I0701 23:13:56.649133  275844 cri.go:87] found id: "b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7"
	I0701 23:13:56.649140  275844 cri.go:87] found id: "50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52"
	I0701 23:13:56.649146  275844 cri.go:87] found id: "f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c"
	I0701 23:13:56.649152  275844 cri.go:87] found id: "a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2"
	I0701 23:13:56.649158  275844 cri.go:87] found id: "042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c"
	I0701 23:13:56.649164  275844 cri.go:87] found id: ""
	I0701 23:13:56.649169  275844 cri.go:232] Stopping containers: [e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7 50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52 f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2 042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c]
	I0701 23:13:56.649212  275844 ssh_runner.go:195] Run: which crictl
	I0701 23:13:56.652179  275844 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7 50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52 f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2 042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c
	I0701 23:13:56.676014  275844 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0701 23:13:56.685537  275844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:13:56.692196  275844 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul  1 23:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul  1 23:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Jul  1 23:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jul  1 23:00 /etc/kubernetes/scheduler.conf
	
	I0701 23:13:56.692247  275844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0701 23:13:56.698641  275844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0701 23:13:56.704856  275844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0701 23:13:56.711153  275844 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.711210  275844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 23:13:56.717322  275844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0701 23:13:56.723423  275844 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.723459  275844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0701 23:13:56.729312  275844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:13:56.736598  275844 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0701 23:13:56.736617  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:56.781688  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:57.445598  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:57.633371  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:57.679946  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
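The restart path above replays discrete `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml instead of doing a full `kubeadm init`. A sketch of that sequence, assuming the binary and config paths shown in the log (an illustration, not minikube's implementation):

// Sketch: run the logged kubeadm init phases in order, stopping on failure.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	config := "/var/tmp/minikube/kubeadm.yaml"
	kubeadm := "/var/lib/minikube/binaries/v1.24.2/kubeadm"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", config)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}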
	I0701 23:13:57.749368  275844 api_server.go:51] waiting for apiserver process to appear ...
	I0701 23:13:57.749432  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:13:58.318180  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:13:58.818690  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:13:58.830974  275844 api_server.go:71] duration metric: took 1.081606586s to wait for apiserver process to appear ...
	I0701 23:13:58.831001  275844 api_server.go:87] waiting for apiserver healthz status ...
	I0701 23:13:58.831034  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:13:58.831436  275844 api_server.go:256] stopped: https://192.168.76.2:8444/healthz: Get "https://192.168.76.2:8444/healthz": dial tcp 192.168.76.2:8444: connect: connection refused
	I0701 23:13:59.331708  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:14:01.921615  275844 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:14:01.921654  275844 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:14:02.332201  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:14:02.336755  275844 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:14:02.336792  275844 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:14:02.831892  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:14:02.836248  275844 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:14:02.836275  275844 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:14:03.331795  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:14:03.337047  275844 api_server.go:266] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0701 23:14:03.343503  275844 api_server.go:140] control plane version: v1.24.2
	I0701 23:14:03.343525  275844 api_server.go:130] duration metric: took 4.512518171s to wait for apiserver health ...
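The healthz wait above polls https://192.168.76.2:8444/healthz roughly every half second; 500 responses listing failed post-start hooks are expected while the apiserver bootstraps, and the wait ends on the first 200/ok. A sketch of such a poller (certificate verification disabled here because the apiserver serves a cluster-local cert; timings are illustrative):

// Sketch: poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is "ok"
			}
			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.76.2:8444/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
}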
	I0701 23:14:03.343535  275844 cni.go:95] Creating CNI manager for ""
	I0701 23:14:03.343540  275844 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:14:03.345598  275844 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:13:59.835489  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:02.335364  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:03.347224  275844 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:14:03.350686  275844 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:14:03.350707  275844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:14:03.363866  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0701 23:14:04.295415  275844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 23:14:04.301798  275844 system_pods.go:59] 9 kube-system pods found
	I0701 23:14:04.301825  275844 system_pods.go:61] "coredns-6d4b75cb6d-zmnqs" [f0e0d22f-cd83-4531-8778-32070816b159] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:14:04.301837  275844 system_pods.go:61] "etcd-default-k8s-different-port-20220701230032-10066" [c4b3993a-3a6c-4827-8250-b951a48b9432] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0701 23:14:04.301844  275844 system_pods.go:61] "kindnet-49h72" [bee4a070-eb2f-45af-a824-f8ebb08e21cb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0701 23:14:04.301851  275844 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220701230032-10066" [2ce9acd5-e8e7-425b-bb9b-5dd480397910] Running
	I0701 23:14:04.301860  275844 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220701230032-10066" [2fec1fad-34c5-4b47-8713-8e789b816ca6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0701 23:14:04.301868  275844 system_pods.go:61] "kube-proxy-qg5j2" [c67a38f9-ae75-40ea-8992-85a437368c50] Running
	I0701 23:14:04.301873  275844 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220701230032-10066" [49056cd0-4107-4377-ba51-b97af35cbe72] Running
	I0701 23:14:04.301882  275844 system_pods.go:61] "metrics-server-5c6f97fb75-mkq9q" [f5b66095-14d2-4de4-9f1d-2cd5371ec0fd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:14:04.301890  275844 system_pods.go:61] "storage-provisioner" [6e0344bb-c7de-41f4-95d2-f30576ae036c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:14:04.301898  275844 system_pods.go:74] duration metric: took 6.458628ms to wait for pod list to return data ...
	I0701 23:14:04.301907  275844 node_conditions.go:102] verifying NodePressure condition ...
	I0701 23:14:04.304305  275844 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0701 23:14:04.304330  275844 node_conditions.go:123] node cpu capacity is 8
	I0701 23:14:04.304343  275844 node_conditions.go:105] duration metric: took 2.432316ms to run NodePressure ...
	I0701 23:14:04.304363  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:14:04.434166  275844 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0701 23:14:04.438097  275844 kubeadm.go:777] kubelet initialised
	I0701 23:14:04.438123  275844 kubeadm.go:778] duration metric: took 3.933976ms waiting for restarted kubelet to initialise ...
	I0701 23:14:04.438131  275844 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:14:04.443068  275844 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace to be "Ready" ...
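The pod_ready check above reduces to reading the pod's PodReady condition: a Pending pod whose PodScheduled condition is False with reason Unschedulable, like the coredns pods in this log, never reports Ready. A client-go sketch of that check (the kubeconfig path is a placeholder, not the CI path used above):

// Sketch: fetch a pod and test its Ready condition via client-go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // no Ready condition yet, e.g. still Pending/Unschedulable
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := podReady(cs, "kube-system", "coredns-6d4b75cb6d-zmnqs")
	fmt.Println("ready:", ok, "err:", err)
}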
	I0701 23:14:06.448162  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:04.335402  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:06.335651  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:08.448866  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:10.948772  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:08.834525  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:11.335287  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:12.949108  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:15.448393  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:13.834432  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:15.835251  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:18.334462  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:17.948235  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:19.948671  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:20.334833  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:22.334898  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:21.948914  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:23.949013  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:26.448470  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:24.335241  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:26.834599  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:28.948377  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:30.948441  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:29.334764  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:31.834659  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:32.948974  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:35.448453  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:33.835115  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:36.334527  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:37.448971  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:39.449007  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:38.834645  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:41.335647  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:41.948832  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:43.948861  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:46.448244  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:43.834536  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:45.835152  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:48.334898  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:48.448469  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:50.448941  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:50.335336  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:52.834828  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:52.948268  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:54.948294  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:55.334778  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:56.331712  269883 pod_ready.go:81] duration metric: took 4m0.0026135s waiting for pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace to be "Ready" ...
	E0701 23:14:56.331755  269883 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0701 23:14:56.331779  269883 pod_ready.go:38] duration metric: took 4m0.007826908s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:14:56.331809  269883 kubeadm.go:630] restartCluster took 4m10.917993696s
	W0701 23:14:56.331941  269883 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0701 23:14:56.331974  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0701 23:14:57.984431  269883 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.65243003s)
	I0701 23:14:57.984496  269883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:14:57.994269  269883 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:14:58.001094  269883 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0701 23:14:58.001159  269883 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:14:58.007683  269883 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 23:14:58.007734  269883 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
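	Condensed, this is the recovery path taken once restartCluster gives up: wipe the old control-plane state and re-initialise from the staged config. Flags and paths are exactly those in the log lines above; the full --ignore-preflight-errors list is abbreviated here to the one check the log calls out at 23:14:58.001 (SystemVerification, skipped under the docker driver):

	    sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" \
	      kubeadm reset --cri-socket /run/containerd/containerd.sock --force
	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	        --ignore-preflight-errors=SystemVerification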
	I0701 23:14:56.949272  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:58.949543  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:01.449627  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:03.950758  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:06.448470  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:06.936698  269883 out.go:204]   - Generating certificates and keys ...
	I0701 23:15:06.939424  269883 out.go:204]   - Booting up control plane ...
	I0701 23:15:06.941904  269883 out.go:204]   - Configuring RBAC rules ...
	I0701 23:15:06.944403  269883 cni.go:95] Creating CNI manager for ""
	I0701 23:15:06.944429  269883 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:15:06.945976  269883 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:15:06.947445  269883 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:15:06.951630  269883 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:15:06.951650  269883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:15:06.966756  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
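	Because the docker driver with the containerd runtime provides no CNI of its own, minikube applies its bundled kindnet manifest here; kindnet runs as a DaemonSet in kube-system (the kindnet-* pods elsewhere in these logs come from DaemonSets created this way). A hedged follow-up check, not performed by the test itself; the DaemonSet name is taken from minikube's kindnet manifest and may differ in other versions:

	    sudo /var/lib/minikube/binaries/v1.24.2/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system rollout status daemonset/kindnet --timeout=120s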
	I0701 23:15:07.699280  269883 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 23:15:07.699401  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:07.699419  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04 minikube.k8s.io/name=no-preload-20220701225718-10066 minikube.k8s.io/updated_at=2022_07_01T23_15_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:07.706386  269883 ops.go:34] apiserver oom_adj: -16
	I0701 23:15:07.765556  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:08.338006  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:08.448681  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:10.448820  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:08.838005  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:09.337996  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:09.837437  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:10.337629  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:10.837363  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:11.337763  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:11.838075  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:12.338080  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:12.837649  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:13.337387  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:12.449226  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:14.948189  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:13.838035  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:14.337961  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:14.838063  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:15.338241  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:15.837500  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:16.337613  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:16.838363  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:17.337701  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:17.838061  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:18.337742  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:18.838306  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:19.337570  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:19.837680  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:19.892044  269883 kubeadm.go:1045] duration metric: took 12.192690701s to wait for elevateKubeSystemPrivileges.
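	The burst of "kubectl get sa default" runs above is a readiness poll: minikube loops until the "default" ServiceAccount exists, which signals the controller-manager's service-account machinery is up and the elevated kube-system privileges can be considered settled. As a standalone sketch of the same wait:

	    until sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5   # the log shows roughly half-second spacing between attempts
	    done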
	I0701 23:15:19.892072  269883 kubeadm.go:397] StartCluster complete in 4m34.521249474s
	I0701 23:15:19.892091  269883 settings.go:142] acquiring lock: {Name:mk319951f11766fbe002e53432d5b04e4322851e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:15:19.892193  269883 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:15:19.893038  269883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:15:20.407163  269883 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220701225718-10066" rescaled to 1
	I0701 23:15:20.407233  269883 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 23:15:20.409054  269883 out.go:177] * Verifying Kubernetes components...
	I0701 23:15:20.407277  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 23:15:20.407307  269883 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0701 23:15:20.407455  269883 config.go:178] Loaded profile config "no-preload-20220701225718-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:15:20.410261  269883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:15:20.410307  269883 addons.go:65] Setting dashboard=true in profile "no-preload-20220701225718-10066"
	I0701 23:15:20.410316  269883 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220701225718-10066"
	I0701 23:15:20.410322  269883 addons.go:65] Setting metrics-server=true in profile "no-preload-20220701225718-10066"
	I0701 23:15:20.410331  269883 addons.go:153] Setting addon dashboard=true in "no-preload-20220701225718-10066"
	I0701 23:15:20.410333  269883 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220701225718-10066"
	W0701 23:15:20.410339  269883 addons.go:162] addon dashboard should already be in state true
	I0701 23:15:20.410339  269883 addons.go:153] Setting addon metrics-server=true in "no-preload-20220701225718-10066"
	W0701 23:15:20.410348  269883 addons.go:162] addon metrics-server should already be in state true
	I0701 23:15:20.410378  269883 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 23:15:20.410384  269883 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 23:15:20.410308  269883 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220701225718-10066"
	I0701 23:15:20.410415  269883 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220701225718-10066"
	W0701 23:15:20.410428  269883 addons.go:162] addon storage-provisioner should already be in state true
	I0701 23:15:20.410464  269883 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 23:15:20.410690  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.410883  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.410898  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.410944  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.462647  269883 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0701 23:15:20.462859  269883 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220701225718-10066"
	I0701 23:15:20.464095  269883 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0701 23:15:20.464150  269883 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0701 23:15:20.464162  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0701 23:15:20.464109  269883 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0701 23:15:20.464170  269883 addons.go:162] addon default-storageclass should already be in state true
	I0701 23:15:20.465490  269883 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0701 23:15:20.466852  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0701 23:15:20.466866  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0701 23:15:20.465507  269883 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 23:15:20.466910  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:15:16.948842  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:18.949526  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:21.448543  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:20.468347  269883 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:15:20.468364  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 23:15:20.468412  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:15:20.465559  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:15:20.467550  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.497855  269883 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220701225718-10066" to be "Ready" ...
	I0701 23:15:20.497910  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0701 23:15:20.515144  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:15:20.520029  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:15:20.522289  269883 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 23:15:20.522310  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 23:15:20.522357  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:15:20.524783  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:15:20.568239  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:15:20.635327  269883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:15:20.635528  269883 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0701 23:15:20.635546  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0701 23:15:20.635773  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0701 23:15:20.635792  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0701 23:15:20.720153  269883 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0701 23:15:20.720184  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0701 23:15:20.720330  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0701 23:15:20.720356  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0701 23:15:20.735914  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0701 23:15:20.735942  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0701 23:15:20.738036  269883 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:15:20.738058  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0701 23:15:20.751468  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0701 23:15:20.751494  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0701 23:15:20.751989  269883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:15:20.830998  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0701 23:15:20.831029  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0701 23:15:20.835184  269883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 23:15:20.919071  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0701 23:15:20.919097  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0701 23:15:20.931803  269883 start.go:809] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
	I0701 23:15:20.938634  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0701 23:15:20.938663  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0701 23:15:21.027932  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0701 23:15:21.027961  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0701 23:15:21.120018  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:15:21.120044  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0701 23:15:21.139289  269883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:15:21.542831  269883 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220701225718-10066"
	I0701 23:15:22.318204  269883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.178852341s)
	I0701 23:15:22.320260  269883 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0701 23:15:22.321764  269883 addons.go:414] enableAddons completed in 1.914474598s
	I0701 23:15:22.506049  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:23.449129  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:25.948784  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:25.003072  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:27.003942  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:28.448748  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:30.948490  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:29.503567  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:31.503801  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:33.448177  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:35.948336  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:33.504159  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:35.504602  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:38.003422  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:37.948379  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:39.948560  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:40.504288  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:42.504480  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:41.949060  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:43.949319  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:46.449018  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:44.504514  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:47.002872  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:48.948340  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:51.448205  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:49.003639  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:51.503660  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:53.448249  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:55.448938  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:53.503915  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:56.003212  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:58.003807  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:57.948938  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:00.448920  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:00.504360  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:03.003336  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:02.449149  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:04.449385  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:05.503324  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:07.503773  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:06.948721  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:09.448775  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:10.003039  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:12.003124  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:11.948462  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:14.448466  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:16.449003  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:14.504207  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:17.003682  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:18.948883  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:21.448510  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:19.503321  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:21.503670  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:23.949051  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:26.448494  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:23.504169  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:26.003440  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:28.448711  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:30.950336  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:28.503980  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:31.003131  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:33.003828  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:33.448272  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:35.448817  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:35.503530  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:37.503721  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:37.449097  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:39.948158  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:39.504219  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:42.002779  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:41.948654  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:43.948719  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:46.448800  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:44.003891  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:46.503378  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:48.948666  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:50.948781  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:48.503897  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:51.003349  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:53.448686  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:55.948675  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:53.504221  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:56.003927  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:58.448263  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:00.948090  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:58.503637  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:00.503665  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:02.504224  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:02.948518  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:04.948735  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:05.003494  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:07.503949  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:06.948781  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:09.448480  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:11.448536  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:09.504075  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:12.003349  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:13.448566  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:15.948312  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:14.004090  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:16.503717  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:17.948940  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:20.449080  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:18.504348  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:21.002849  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:23.003827  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:22.948356  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:24.949063  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:25.503280  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:27.503458  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:26.949277  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:29.448968  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:29.503895  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:32.003296  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:31.948774  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:33.948802  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:36.448693  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:34.003684  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:36.504246  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:38.948200  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:41.449095  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:39.003597  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:41.504297  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:43.948596  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:46.448338  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:44.003653  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:46.003704  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:48.448406  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:50.449049  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:48.503830  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:51.002929  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:52.949418  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:55.448267  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:53.503901  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:56.003435  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:57.948337  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:59.949522  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:58.503409  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:00.504015  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:02.504075  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:02.449005  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:18:04.445635  275844 pod_ready.go:81] duration metric: took 4m0.002536043s waiting for pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace to be "Ready" ...
	E0701 23:18:04.445658  275844 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace to be "Ready" (will not retry!)
	I0701 23:18:04.445676  275844 pod_ready.go:38] duration metric: took 4m0.00753476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:18:04.445715  275844 kubeadm.go:630] restartCluster took 4m10.861963713s
	W0701 23:18:04.445855  275844 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0701 23:18:04.445882  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0701 23:18:06.095490  275844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.649588457s)
	I0701 23:18:06.095547  275844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:18:06.104815  275844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:18:06.112334  275844 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0701 23:18:06.112376  275844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:18:06.119483  275844 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 23:18:06.119534  275844 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0701 23:18:06.370658  275844 out.go:204]   - Generating certificates and keys ...
	I0701 23:18:05.003477  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:07.003973  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:07.277086  275844 out.go:204]   - Booting up control plane ...
	I0701 23:18:09.503332  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:11.504503  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:14.316275  275844 out.go:204]   - Configuring RBAC rules ...
	I0701 23:18:14.730162  275844 cni.go:95] Creating CNI manager for ""
	I0701 23:18:14.730189  275844 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:18:14.731634  275844 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:18:14.732857  275844 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:18:14.739597  275844 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:18:14.739622  275844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:18:14.825236  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0701 23:18:15.561507  275844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 23:18:15.561626  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:15.561637  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04 minikube.k8s.io/name=default-k8s-different-port-20220701230032-10066 minikube.k8s.io/updated_at=2022_07_01T23_18_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:15.568394  275844 ops.go:34] apiserver oom_adj: -16
	I0701 23:18:15.634685  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:16.190642  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:14.002820  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:16.003780  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:16.690023  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:17.190952  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:17.690163  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:18.191022  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:18.690054  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:19.190723  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:19.690097  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:20.190968  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:20.691032  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:21.190434  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:18.503619  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:20.504289  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:23.003341  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:21.690038  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:22.190938  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:22.690621  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:23.190651  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:23.690833  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:24.190934  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:24.690962  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:25.190256  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:25.690333  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:26.190101  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:26.690887  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:27.190074  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:27.248216  275844 kubeadm.go:1045] duration metric: took 11.686670316s to wait for elevateKubeSystemPrivileges.
	I0701 23:18:27.248246  275844 kubeadm.go:397] StartCluster complete in 4m33.70628023s
	I0701 23:18:27.248264  275844 settings.go:142] acquiring lock: {Name:mk319951f11766fbe002e53432d5b04e4322851e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:18:27.248355  275844 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:18:27.249185  275844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:18:27.763199  275844 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220701230032-10066" rescaled to 1
	I0701 23:18:27.763267  275844 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 23:18:27.766618  275844 out.go:177] * Verifying Kubernetes components...
	I0701 23:18:27.763306  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 23:18:27.763330  275844 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0701 23:18:27.766747  275844 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.766765  275844 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220701230032-10066"
	W0701 23:18:27.766778  275844 addons.go:162] addon storage-provisioner should already be in state true
	I0701 23:18:27.766806  275844 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.766825  275844 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.766828  275844 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:18:27.763473  275844 config.go:178] Loaded profile config "default-k8s-different-port-20220701230032-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:18:27.766824  275844 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.768481  275844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:18:27.768504  275844 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220701230032-10066"
	W0701 23:18:27.766835  275844 addons.go:162] addon dashboard should already be in state true
	I0701 23:18:27.768632  275844 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:18:27.766843  275844 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.768713  275844 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220701230032-10066"
	W0701 23:18:27.768733  275844 addons.go:162] addon metrics-server should already be in state true
	I0701 23:18:27.768768  275844 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:18:27.767332  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.768887  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.769184  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.769187  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.831262  275844 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0701 23:18:27.832550  275844 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 23:18:27.833969  275844 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:18:27.833992  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 23:18:27.834040  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:18:27.835526  275844 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0701 23:18:27.833023  275844 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.837673  275844 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	W0701 23:18:27.837677  275844 addons.go:162] addon default-storageclass should already be in state true
	I0701 23:18:25.003796  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:27.504253  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:27.837692  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0701 23:18:27.839084  275844 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0701 23:18:27.839099  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0701 23:18:27.839108  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0701 23:18:27.839153  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:18:27.837723  275844 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:18:27.839164  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:18:27.839691  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.856622  275844 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220701230032-10066" to be "Ready" ...
	I0701 23:18:27.856645  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0701 23:18:27.890091  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:18:27.891200  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:18:27.895622  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:18:27.896930  275844 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 23:18:27.896946  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 23:18:27.896980  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:18:27.937496  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:18:28.136017  275844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 23:18:28.136703  275844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:18:28.139953  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0701 23:18:28.139977  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0701 23:18:28.144217  275844 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0701 23:18:28.144239  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0701 23:18:28.234055  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0701 23:18:28.234083  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0701 23:18:28.318902  275844 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0701 23:18:28.318936  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0701 23:18:28.336787  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0701 23:18:28.336818  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0701 23:18:28.423063  275844 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:18:28.423089  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0701 23:18:28.427844  275844 start.go:809] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0701 23:18:28.432989  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0701 23:18:28.433019  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0701 23:18:28.442227  275844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:18:28.523695  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0701 23:18:28.523727  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0701 23:18:28.618333  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0701 23:18:28.618365  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0701 23:18:28.636855  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0701 23:18:28.636885  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0701 23:18:28.652952  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0701 23:18:28.652974  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0701 23:18:28.739775  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:18:28.739814  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0701 23:18:28.832453  275844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:18:29.251359  275844 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:29.544427  275844 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0701 23:18:29.545959  275844 addons.go:414] enableAddons completed in 1.78263451s
	I0701 23:18:29.863227  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:30.003794  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:32.503813  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:31.863254  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:34.363382  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:36.363413  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:34.504191  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:37.003581  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:38.363717  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:40.863294  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:39.504225  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:42.003356  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:42.863457  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:45.363613  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:44.003625  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:46.504247  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:47.863096  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:49.863849  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:49.003291  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:51.003453  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:52.363545  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:54.363732  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:53.504320  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:56.003487  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:56.862624  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:58.863111  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:00.863425  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:58.504264  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:00.504489  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:03.003398  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:03.363680  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:05.363957  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:05.004021  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:07.503771  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:07.364035  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:09.364588  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:10.003129  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:12.003382  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:11.863661  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:14.362895  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:16.363322  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:14.504382  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:17.003939  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:19.503019  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:20.505831  269883 node_ready.go:38] duration metric: took 4m0.007935364s waiting for node "no-preload-20220701225718-10066" to be "Ready" ...
	I0701 23:19:20.507971  269883 out.go:177] 
	W0701 23:19:20.509514  269883 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0701 23:19:20.509536  269883 out.go:239] * 
	W0701 23:19:20.510312  269883 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 23:19:20.511951  269883 out.go:177] 
	I0701 23:19:18.363478  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:20.863309  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:23.362826  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:25.363077  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:27.863010  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:29.863599  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:31.863690  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:34.363405  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:36.862844  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:39.363009  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:41.863136  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:43.863182  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:46.362920  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:48.363519  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:50.365995  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:52.863524  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:55.363287  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:57.363494  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:59.863734  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:02.362902  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:04.363417  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:06.863299  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:08.863390  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:11.363598  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:13.863329  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:16.363213  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:18.363246  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:20.862846  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:22.863412  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:25.363572  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:27.863611  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:29.863926  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:32.363408  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:34.363894  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:36.863454  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:39.363389  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:41.363918  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:43.364119  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:45.863734  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:48.363224  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:50.862933  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:52.863303  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:54.863540  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:57.363333  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:59.363619  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:01.863747  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:04.363462  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:06.863642  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:09.363229  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:11.863382  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:14.363453  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:16.363483  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:18.863559  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:20.863852  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:23.363579  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:25.863700  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:27.863820  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:30.363502  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:32.365183  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:34.862977  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:36.863647  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:39.363489  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:41.862636  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:43.863818  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:46.362854  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:48.363608  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:50.863761  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:53.363511  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:55.363792  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:57.863460  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:00.363227  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:02.863069  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:04.863654  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:06.863767  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:09.362775  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:11.363266  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:13.363390  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:15.863386  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:18.363719  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:20.363796  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:22.863167  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:24.863249  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:27.362843  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:27.865277  275844 node_ready.go:38] duration metric: took 4m0.008613758s waiting for node "default-k8s-different-port-20220701230032-10066" to be "Ready" ...
	I0701 23:22:27.867660  275844 out.go:177] 
	W0701 23:22:27.869191  275844 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0701 23:22:27.869208  275844 out.go:239] * 
	W0701 23:22:27.869949  275844 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 23:22:27.871815  275844 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	8c5a1d8d3616e       6fb66cd78abfe       45 seconds ago      Running             kindnet-cni               4                   87b8e95bd6528
	0aa91897baae9       6fb66cd78abfe       4 minutes ago       Exited              kindnet-cni               3                   87b8e95bd6528
	b49ce69e2c582       a634548d10b03       13 minutes ago      Running             kube-proxy                0                   b58699e3af072
	becf96e8231dc       aebe758cef4cd       13 minutes ago      Running             etcd                      2                   2f69dd21fb9f2
	ab7802906a7b0       d3377ffb7177c       13 minutes ago      Running             kube-apiserver            2                   55afd0afff51f
	0efd5173ba061       34cdf99b1bb3b       13 minutes ago      Running             kube-controller-manager   2                   86e1c2cbbd62a
	62574f0759001       5d725196c1f47       13 minutes ago      Running             kube-scheduler            2                   96e658a134b04
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-07-01 23:10:29 UTC, end at Fri 2022-07-01 23:28:23 UTC. --
	Jul 01 23:20:42 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:20:42.686851799Z" level=info msg="RemoveContainer for \"b44647d75d006da4be7e60086562915fbe16a84409b56d9fce3085c750f919d9\" returns successfully"
	Jul 01 23:20:57 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:20:57.023478521Z" level=info msg="CreateContainer within sandbox \"87b8e95bd6528609118b532731d4452a52308b23651091cdaf220166488ae104\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jul 01 23:20:57 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:20:57.036023840Z" level=info msg="CreateContainer within sandbox \"87b8e95bd6528609118b532731d4452a52308b23651091cdaf220166488ae104\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"6647d2fbba404c702a0686cb668c08202be20d0402f0be624efb647fd50fbd3e\""
	Jul 01 23:20:57 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:20:57.036488483Z" level=info msg="StartContainer for \"6647d2fbba404c702a0686cb668c08202be20d0402f0be624efb647fd50fbd3e\""
	Jul 01 23:20:57 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:20:57.137662132Z" level=info msg="StartContainer for \"6647d2fbba404c702a0686cb668c08202be20d0402f0be624efb647fd50fbd3e\" returns successfully"
	Jul 01 23:23:37 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:23:37.567093826Z" level=info msg="shim disconnected" id=6647d2fbba404c702a0686cb668c08202be20d0402f0be624efb647fd50fbd3e
	Jul 01 23:23:37 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:23:37.567159097Z" level=warning msg="cleaning up after shim disconnected" id=6647d2fbba404c702a0686cb668c08202be20d0402f0be624efb647fd50fbd3e namespace=k8s.io
	Jul 01 23:23:37 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:23:37.567173584Z" level=info msg="cleaning up dead shim"
	Jul 01 23:23:37 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:23:37.576517826Z" level=warning msg="cleanup warnings time=\"2022-07-01T23:23:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4400 runtime=io.containerd.runc.v2\n"
	Jul 01 23:23:37 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:23:37.999569332Z" level=info msg="RemoveContainer for \"7a293987606973e87df061b7f552dc0eb5a70ea9394f2d383228f9c2d3742d5d\""
	Jul 01 23:23:38 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:23:38.004445227Z" level=info msg="RemoveContainer for \"7a293987606973e87df061b7f552dc0eb5a70ea9394f2d383228f9c2d3742d5d\" returns successfully"
	Jul 01 23:24:04 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:24:04.023519436Z" level=info msg="CreateContainer within sandbox \"87b8e95bd6528609118b532731d4452a52308b23651091cdaf220166488ae104\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jul 01 23:24:04 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:24:04.037441199Z" level=info msg="CreateContainer within sandbox \"87b8e95bd6528609118b532731d4452a52308b23651091cdaf220166488ae104\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"0aa91897baae9b488aa95bb47d7b6dd336cb806bf6069fb97d643d03ac7b674d\""
	Jul 01 23:24:04 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:24:04.037935784Z" level=info msg="StartContainer for \"0aa91897baae9b488aa95bb47d7b6dd336cb806bf6069fb97d643d03ac7b674d\""
	Jul 01 23:24:04 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:24:04.121862338Z" level=info msg="StartContainer for \"0aa91897baae9b488aa95bb47d7b6dd336cb806bf6069fb97d643d03ac7b674d\" returns successfully"
	Jul 01 23:26:44 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:26:44.454689733Z" level=info msg="shim disconnected" id=0aa91897baae9b488aa95bb47d7b6dd336cb806bf6069fb97d643d03ac7b674d
	Jul 01 23:26:44 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:26:44.454770230Z" level=warning msg="cleaning up after shim disconnected" id=0aa91897baae9b488aa95bb47d7b6dd336cb806bf6069fb97d643d03ac7b674d namespace=k8s.io
	Jul 01 23:26:44 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:26:44.454790016Z" level=info msg="cleaning up dead shim"
	Jul 01 23:26:44 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:26:44.463999527Z" level=warning msg="cleanup warnings time=\"2022-07-01T23:26:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4505 runtime=io.containerd.runc.v2\n"
	Jul 01 23:26:45 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:26:45.340332715Z" level=info msg="RemoveContainer for \"6647d2fbba404c702a0686cb668c08202be20d0402f0be624efb647fd50fbd3e\""
	Jul 01 23:26:45 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:26:45.344745193Z" level=info msg="RemoveContainer for \"6647d2fbba404c702a0686cb668c08202be20d0402f0be624efb647fd50fbd3e\" returns successfully"
	Jul 01 23:27:38 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:27:38.023105532Z" level=info msg="CreateContainer within sandbox \"87b8e95bd6528609118b532731d4452a52308b23651091cdaf220166488ae104\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Jul 01 23:27:38 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:27:38.035226693Z" level=info msg="CreateContainer within sandbox \"87b8e95bd6528609118b532731d4452a52308b23651091cdaf220166488ae104\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"8c5a1d8d3616ec511e9d19c6604a988dcaaea51335507ee9f581e6338728aaa4\""
	Jul 01 23:27:38 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:27:38.035649085Z" level=info msg="StartContainer for \"8c5a1d8d3616ec511e9d19c6604a988dcaaea51335507ee9f581e6338728aaa4\""
	Jul 01 23:27:38 no-preload-20220701225718-10066 containerd[394]: time="2022-07-01T23:27:38.133649666Z" level=info msg="StartContainer for \"8c5a1d8d3616ec511e9d19c6604a988dcaaea51335507ee9f581e6338728aaa4\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220701225718-10066
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220701225718-10066
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04
	                    minikube.k8s.io/name=no-preload-20220701225718-10066
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_01T23_15_07_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 01 Jul 2022 23:15:03 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220701225718-10066
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 01 Jul 2022 23:28:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 01 Jul 2022 23:25:28 +0000   Fri, 01 Jul 2022 23:15:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 01 Jul 2022 23:25:28 +0000   Fri, 01 Jul 2022 23:15:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 01 Jul 2022 23:25:28 +0000   Fri, 01 Jul 2022 23:15:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 01 Jul 2022 23:25:28 +0000   Fri, 01 Jul 2022 23:15:01 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-20220701225718-10066
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873484Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873484Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                82dabe3f-d133-4afb-a4d2-ee1450b85ce0
	  Boot ID:                    a4927dcd-d031-4927-a8c8-2ea0f9a10287
	  Kernel Version:             5.15.0-1012-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.6
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-20220701225718-10066                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-7kwfz                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-no-preload-20220701225718-10066             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-20220701225718-10066    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-5mclw                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-20220701225718-10066             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node no-preload-20220701225718-10066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node no-preload-20220701225718-10066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node no-preload-20220701225718-10066 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m   node-controller  Node no-preload-20220701225718-10066 event: Registered Node no-preload-20220701225718-10066 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +1.002277] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +2.015803] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000004] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +4.255546] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000007] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000001] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000005] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000011] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +8.195166] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000007] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000003] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000003] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	
	* 
	* ==> etcd [becf96e8231dc4efb269b660148e06fcc627b6ed8e784d88e605bc513ffa4068] <==
	* {"level":"info","ts":"2022-07-01T23:15:00.442Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-01T23:15:00.442Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2022-07-01T23:15:00.442Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2022-07-01T23:15:00.443Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-01T23:15:00.443Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-01T23:15:01.433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-01T23:15:01.433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-01T23:15:01.433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2022-07-01T23:15:01.433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2022-07-01T23:15:01.433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2022-07-01T23:15:01.433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2022-07-01T23:15:01.433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2022-07-01T23:15:01.433Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:15:01.434Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:15:01.435Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:15:01.435Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:15:01.435Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-20220701225718-10066 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-01T23:15:01.435Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-01T23:15:01.435Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-01T23:15:01.435Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-01T23:15:01.435Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-01T23:15:01.436Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2022-07-01T23:15:01.436Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-01T23:25:01.449Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":549}
	{"level":"info","ts":"2022-07-01T23:25:01.449Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":549,"took":"485.093µs"}
	
	* 
	* ==> kernel <==
	*  23:28:24 up  1:10,  0 users,  load average: 0.29, 0.32, 0.76
	Linux no-preload-20220701225718-10066 5.15.0-1012-gcp #17~20.04.1-Ubuntu SMP Thu Jun 23 16:10:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [ab7802906a7b09692d38717ace4669db0b1201b927c68b01400a1e45e6dae90b] <==
	* W0701 23:23:04.672526       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:23:04.672587       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0701 23:23:04.672601       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 23:25:04.677117       1 handler_proxy.go:102] no RequestInfo found in the context
	W0701 23:25:04.677145       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:25:04.677176       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0701 23:25:04.677183       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0701 23:25:04.677189       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0701 23:25:04.678313       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 23:26:04.678238       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:26:04.678283       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0701 23:26:04.678295       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 23:26:04.678455       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:26:04.678534       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0701 23:26:04.680328       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 23:28:04.679028       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:28:04.679075       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0701 23:28:04.679083       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 23:28:04.681289       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:28:04.681358       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0701 23:28:04.681384       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [0efd5173ba0612f8f1dec4c85b81dd59f286dfe8f01d588eb70b05fb32f2f7f0] <==
	* W0701 23:22:19.532681       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:22:49.000753       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:22:49.546091       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:23:19.011483       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:23:19.559488       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:23:49.021902       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:23:49.573224       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:24:19.032045       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:24:19.588484       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:24:49.041347       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:24:49.602195       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:25:19.050135       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:25:19.616733       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:25:49.059485       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:25:49.631169       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:26:19.083208       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:26:19.646469       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:26:49.095862       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:26:49.661815       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:27:19.120568       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:27:19.677179       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:27:49.131519       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:27:49.692327       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:28:19.154316       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:28:19.707459       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [b49ce69e2c58257158688cb2227d8b0dda1b0aa00833f9b3b42a772e1765f35b] <==
	* I0701 23:15:20.122786       1 node.go:163] Successfully retrieved node IP: 192.168.94.2
	I0701 23:15:20.122840       1 server_others.go:138] "Detected node IP" address="192.168.94.2"
	I0701 23:15:20.122872       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0701 23:15:20.145399       1 server_others.go:206] "Using iptables Proxier"
	I0701 23:15:20.145447       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0701 23:15:20.145461       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0701 23:15:20.145476       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0701 23:15:20.145520       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 23:15:20.145701       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 23:15:20.145952       1 server.go:661] "Version info" version="v1.24.2"
	I0701 23:15:20.145976       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 23:15:20.146765       1 config.go:317] "Starting service config controller"
	I0701 23:15:20.146789       1 config.go:226] "Starting endpoint slice config controller"
	I0701 23:15:20.146803       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0701 23:15:20.146812       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0701 23:15:20.146944       1 config.go:444] "Starting node config controller"
	I0701 23:15:20.146972       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0701 23:15:20.247840       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0701 23:15:20.247852       1 shared_informer.go:262] Caches are synced for service config
	I0701 23:15:20.247946       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [62574f07590010ac5157c1dbc72d41c9fd2a0b4834828193da39400420cee4b4] <==
	* W0701 23:15:03.741505       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0701 23:15:03.741853       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0701 23:15:03.741949       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0701 23:15:03.741632       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0701 23:15:03.742068       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0701 23:15:03.742249       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0701 23:15:03.743167       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 23:15:03.743203       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0701 23:15:03.743413       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0701 23:15:03.743445       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0701 23:15:03.743525       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0701 23:15:03.743548       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0701 23:15:03.743728       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0701 23:15:03.743767       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0701 23:15:04.645805       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 23:15:04.645874       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0701 23:15:04.709168       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0701 23:15:04.709203       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0701 23:15:04.738349       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0701 23:15:04.738392       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0701 23:15:04.818092       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0701 23:15:04.818135       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0701 23:15:04.818318       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0701 23:15:04.818372       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0701 23:15:07.032640       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-07-01 23:10:29 UTC, end at Fri 2022-07-01 23:28:24 UTC. --
	Jul 01 23:26:57 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:26:57.292805    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:27:00 no-preload-20220701225718-10066 kubelet[3057]: I0701 23:27:00.020035    3057 scope.go:110] "RemoveContainer" containerID="0aa91897baae9b488aa95bb47d7b6dd336cb806bf6069fb97d643d03ac7b674d"
	Jul 01 23:27:00 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:27:00.020410    3057 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-7kwfz_kube-system(c2ec92c0-3a08-45d8-aeb8-b2a4b5cb6e2a)\"" pod="kube-system/kindnet-7kwfz" podUID=c2ec92c0-3a08-45d8-aeb8-b2a4b5cb6e2a
	Jul 01 23:27:02 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:27:02.294474    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:27:07 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:27:07.295535    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:27:12 no-preload-20220701225718-10066 kubelet[3057]: I0701 23:27:12.020317    3057 scope.go:110] "RemoveContainer" containerID="0aa91897baae9b488aa95bb47d7b6dd336cb806bf6069fb97d643d03ac7b674d"
	Jul 01 23:27:12 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:27:12.020584    3057 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-7kwfz_kube-system(c2ec92c0-3a08-45d8-aeb8-b2a4b5cb6e2a)\"" pod="kube-system/kindnet-7kwfz" podUID=c2ec92c0-3a08-45d8-aeb8-b2a4b5cb6e2a
	Jul 01 23:27:12 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:27:12.297045    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:27:17 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:27:17.298061    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:27:22 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:27:22.299278    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:27:23 no-preload-20220701225718-10066 kubelet[3057]: I0701 23:27:23.019725    3057 scope.go:110] "RemoveContainer" containerID="0aa91897baae9b488aa95bb47d7b6dd336cb806bf6069fb97d643d03ac7b674d"
	Jul 01 23:27:23 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:27:23.019988    3057 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-7kwfz_kube-system(c2ec92c0-3a08-45d8-aeb8-b2a4b5cb6e2a)\"" pod="kube-system/kindnet-7kwfz" podUID=c2ec92c0-3a08-45d8-aeb8-b2a4b5cb6e2a
	Jul 01 23:27:27 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:27:27.300117    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:27:32 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:27:32.300980    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:27:37 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:27:37.302205    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:27:38 no-preload-20220701225718-10066 kubelet[3057]: I0701 23:27:38.020634    3057 scope.go:110] "RemoveContainer" containerID="0aa91897baae9b488aa95bb47d7b6dd336cb806bf6069fb97d643d03ac7b674d"
	Jul 01 23:27:42 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:27:42.303839    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:27:47 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:27:47.304569    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:27:52 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:27:52.305242    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:27:57 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:27:57.306612    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:28:02 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:28:02.307687    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:28:07 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:28:07.308625    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:28:12 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:28:12.309600    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:28:17 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:28:17.310946    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:28:22 no-preload-20220701225718-10066 kubelet[3057]: E0701 23:28:22.311772    3057 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220701225718-10066 -n no-preload-20220701225718-10066
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220701225718-10066 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-6d4b75cb6d-6jmqb metrics-server-5c6f97fb75-629bq storage-provisioner dashboard-metrics-scraper-dffd48c4c-bjxjm kubernetes-dashboard-5fd5574d9f-7vh58
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220701225718-10066 describe pod coredns-6d4b75cb6d-6jmqb metrics-server-5c6f97fb75-629bq storage-provisioner dashboard-metrics-scraper-dffd48c4c-bjxjm kubernetes-dashboard-5fd5574d9f-7vh58
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220701225718-10066 describe pod coredns-6d4b75cb6d-6jmqb metrics-server-5c6f97fb75-629bq storage-provisioner dashboard-metrics-scraper-dffd48c4c-bjxjm kubernetes-dashboard-5fd5574d9f-7vh58: exit status 1 (54.765651ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-6d4b75cb6d-6jmqb" not found
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-629bq" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-dffd48c4c-bjxjm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-5fd5574d9f-7vh58" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220701225718-10066 describe pod coredns-6d4b75cb6d-6jmqb metrics-server-5c6f97fb75-629bq storage-provisioner dashboard-metrics-scraper-dffd48c4c-bjxjm kubernetes-dashboard-5fd5574d9f-7vh58: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.31s)
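
The kubelet log above points at the proximate cause of this failure: kindnet-cni is in CrashLoopBackOff, so the CNI plugin never initializes, the container runtime network stays NotReady, and the pods this test waits for (coredns, metrics-server, dashboard) stay Pending. A minimal client-go sketch for reading the node condition the kubelet keeps reporting follows; the kubeconfig path is a placeholder and the node name is taken from the log above (this is an illustration, not part of the test harness):

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; substitute the profile's kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Node name taken from the failing cluster's log.
		node, err := cs.CoreV1().Nodes().Get(context.TODO(),
			"no-preload-20220701225718-10066", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// With the CNI uninitialized, this stays False and the
				// message mirrors the "network plugin not ready" kubelet error.
				fmt.Printf("Ready=%s reason=%s message=%s\n", c.Status, c.Reason, c.Message)
			}
		}
	}
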

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (542.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-7klw9" [b7fd4967-4115-4ad2-b9af-0fed0ff6449e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
E0701 23:22:32.035047   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
E0701 23:22:38.757414   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/old-k8s-version-20220701225700-10066/client.crt: no such file or directory
E0701 23:23:06.442071   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/old-k8s-version-20220701225700-10066/client.crt: no such file or directory
E0701 23:23:11.918968   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
E0701 23:24:56.556220   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
E0701 23:25:13.509163   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
E0701 23:25:34.503270   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
E0701 23:25:43.467399   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
E0701 23:25:51.855590   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
E0701 23:26:42.423179   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
E0701 23:26:47.696929   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:27:00.872798   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory
E0701 23:27:15.083338   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0701 23:30:34.504029   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0701 23:30:43.467331   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0701 23:30:51.854702   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220701230032-10066 -n default-k8s-different-port-20220701230032-10066
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2022-07-01 23:31:30.258190272 +0000 UTC m=+4079.563127817
start_stop_delete_test.go:274: (dbg) Run:  kubectl --context default-k8s-different-port-20220701230032-10066 describe po kubernetes-dashboard-5fd5574d9f-7klw9 -n kubernetes-dashboard
start_stop_delete_test.go:274: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220701230032-10066 describe po kubernetes-dashboard-5fd5574d9f-7klw9 -n kubernetes-dashboard: context deadline exceeded (1.667µs)
start_stop_delete_test.go:274: kubectl --context default-k8s-different-port-20220701230032-10066 describe po kubernetes-dashboard-5fd5574d9f-7klw9 -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:274: (dbg) Run:  kubectl --context default-k8s-different-port-20220701230032-10066 logs kubernetes-dashboard-5fd5574d9f-7klw9 -n kubernetes-dashboard
start_stop_delete_test.go:274: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220701230032-10066 logs kubernetes-dashboard-5fd5574d9f-7klw9 -n kubernetes-dashboard: context deadline exceeded (258ns)
start_stop_delete_test.go:274: kubectl --context default-k8s-different-port-20220701230032-10066 logs kubernetes-dashboard-5fd5574d9f-7klw9 -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
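
The wait that times out here polls for any pod labeled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace to reach Running within 9m0s. A minimal sketch of such a wait with client-go is shown below, assuming a placeholder kubeconfig path; it illustrates the check being made, not necessarily how helpers_test.go implements it:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Poll every 3s for up to 9m, mirroring the timeout in the failure above.
		err = wait.PollImmediate(3*time.Second, 9*time.Minute, func() (bool, error) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err != nil {
				return false, nil // transient list errors: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil // still Pending (e.g. node tainted not-ready above)
		})
		if err != nil {
			log.Fatalf("dashboard pod never became Running: %v", err)
		}
		fmt.Println("dashboard pod is Running")
	}
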
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220701230032-10066
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220701230032-10066:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93",
	        "Created": "2022-07-01T23:00:40.408283404Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 276238,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-01T23:13:37.253165486Z",
	            "FinishedAt": "2022-07-01T23:13:35.896385451Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93/hostname",
	        "HostsPath": "/var/lib/docker/containers/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93/hosts",
	        "LogPath": "/var/lib/docker/containers/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93/261fd4f89726dc2c58f53acbc102f1a8ed83a81432372f6e8174c5dc4e88ba93-json.log",
	        "Name": "/default-k8s-different-port-20220701230032-10066",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220701230032-10066:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220701230032-10066",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dee8e48f45f2d547450f4daebe89b962ba00d4c3c0e85728311b87d0be50d5f2-init/diff:/var/lib/docker/overlay2/edb6d17bdc5cabdb1fac75f829fa6b696b451285a56daf994d10472ac162d1fa/diff:/var/lib/docker/overlay2/12553db54f9401cb3cd4ea304c82950d708bec4ffbeafa05f361df03389b23f5/diff:/var/lib/docker/overlay2/7e3fcd2813dd5acb24030b1419b7a8a53ea2329287a62c3cd166eb5e47617a4e/diff:/var/lib/docker/overlay2/15f31da888d327ea20972c47badf360a35e5d2b17a3fae8f69cb0d0d1db2bfa0/diff:/var/lib/docker/overlay2/4d7fcdaa5483b16cc999e79056aae74f5814a7aca964696154ac6ec31eebe508/diff:/var/lib/docker/overlay2/9f3a7dbc3c70fba7c77f5d8e54e394ac0c37123defe036d76af7d048d08b974e/diff:/var/lib/docker/overlay2/4f0b0b0343fd3ac1a6d5f774e91228b75f1e33f810714d09d38ecd1e38cfe250/diff:/var/lib/docker/overlay2/bfd684b2b6e2cfe1f30901dbf02332074c65e296b386b083ea38c10128b42115/diff:/var/lib/docker/overlay2/d05e53b39ab1962ee7dd1a17c7c6df7e7db940db5a19b0d9312ccafaf3c67afc/diff:/var/lib/docker/overlay2/7a24a3
8f96e1aff9985bf714d962a37888eced9c4b8a3018e499080634306871/diff:/var/lib/docker/overlay2/b54ee85e20ecd71a593d453d879a26c40869049e30ba5b18b4c021c4cf667ec6/diff:/var/lib/docker/overlay2/cc9f249b45cce30a5a0432a644435d68e765dc097e3bfdbde41b7d725f258866/diff:/var/lib/docker/overlay2/dac381b24f77e5de76295d05ca2d4ae6a611d31efdf28050dc8acbc16f893037/diff:/var/lib/docker/overlay2/fe9cd2517ce90b2330101b987c76605b9d73f40b8c967e33975ad22bd3e427df/diff:/var/lib/docker/overlay2/34da7eb6e789c53ba8f86e0d904243d3314979a523044238ab8b7e9e95f685d4/diff:/var/lib/docker/overlay2/44c3ffe75f70e4d9d5d17064609a4a167961cdca81aab03b2a72dfbe05d09f41/diff:/var/lib/docker/overlay2/76feeb878d0278a48230287307ce3a01000291c1284d2222d114e37836ebc153/diff:/var/lib/docker/overlay2/3ab5eb284edb3402610bb5d4fbd0d8c3fc6b53fd39d2a66cc3a9bb1e313fe5ee/diff:/var/lib/docker/overlay2/7604bb52dbaf4e2c359f0b2b11aa345bbf33498b0346044f4c6ff3178a3e8df5/diff:/var/lib/docker/overlay2/0a19abc50663f9812511b849524a7625a16f183fa64aff5a0a9c1c7664cc2027/diff:/var/lib/d
ocker/overlay2/00d25c59a62da71857955ae29c7c803b77cb4c97b23913f166c62440a3b63456/diff:/var/lib/docker/overlay2/cd6c06e5c4d7bdfb5a78ffcf6c0cf331a6101b713bd238ad5c0ab119ad599bf4/diff:/var/lib/docker/overlay2/73ae644d477b17687dc53b7176c9e05aa10e0e0cc22b0d2adbd96493280d2696/diff:/var/lib/docker/overlay2/1490b9c59b00b3ee6078671a8023960891ee2bfc21bfe92c12a71c87ea04fff1/diff:/var/lib/docker/overlay2/42f40e76b67a2f74340f543b0beab44f98b10e0a0903d4006a173272c52a31d0/diff:/var/lib/docker/overlay2/648763e18e8325f20ff26dd88a1788a2d9e575a9ca24f44b2430918381add179/diff:/var/lib/docker/overlay2/955b13af9b6a054abdcd0dfdf71ce57d6da1b5718bee40c52fc7739b4ff16496/diff:/var/lib/docker/overlay2/1e55f6bfa306e6667808548601e508c0335b7021c9c6791ee5ed3df2514aff12/diff:/var/lib/docker/overlay2/ed1fd4b4489438557a54639b2ed769eb0371193cd0129b222edea8f79274f7e8/diff:/var/lib/docker/overlay2/c4058f514a3e669de2d6139c7bc94d4187679acabf288e44e25b5d50db5b14ac/diff:/var/lib/docker/overlay2/f55c5f04c4a64f585a9b90e7f23e5d5142014f06b3bf877739879029b76
b574f/diff:/var/lib/docker/overlay2/74b9141dda92027ee02e8b3abc4bb05ed13ce01d58f4c3d644e7235e411a863e/diff:/var/lib/docker/overlay2/53dd59b4b1dfa382f4b296dffd27b84e247851098ea1c719fa3b8ed3e158e73d/diff:/var/lib/docker/overlay2/01e3082bf93042f963fa4d0b92796bd5a1f3663e87f24a76a6fb5f3dabdb3d2f/diff:/var/lib/docker/overlay2/6b96d2f01ba7bf16240d81397be753c74fa1eb78889ce36d1a88224496f5ee64/diff:/var/lib/docker/overlay2/1c326eb1859e3f119094b72e595fe89b3be1895193b0f9124ec3aa7be78fa0ff/diff:/var/lib/docker/overlay2/9c31f48fe49160b4b3d824b787329ea1d50133fd86b0c7f2054bebed2bd136b4/diff:/var/lib/docker/overlay2/6b78f7313fcedc631cc839c84f80e065670f47ccc8e548578527bec6b0f2cce3/diff:/var/lib/docker/overlay2/6bf15a167fccdef650b6e6b0daa521189e2c744a9a41640bc0e963421f139437/diff:/var/lib/docker/overlay2/b01230a9beae37bded75410c1a3849626080918fdc36535a1f4092f9227a7ccf/diff:/var/lib/docker/overlay2/793ce1632cff927c8e2528b23a62c94ed141cb52d5df04225085072c61ca5bb7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dee8e48f45f2d547450f4daebe89b962ba00d4c3c0e85728311b87d0be50d5f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dee8e48f45f2d547450f4daebe89b962ba00d4c3c0e85728311b87d0be50d5f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dee8e48f45f2d547450f4daebe89b962ba00d4c3c0e85728311b87d0be50d5f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220701230032-10066",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220701230032-10066/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220701230032-10066",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220701230032-10066",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220701230032-10066",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a2ba47ae2c38208a3afb09c62b2914d723cce37fbff94b39953ca0016b34bc8c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49442"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49441"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49440"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49439"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a2ba47ae2c38",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220701230032-10066": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "261fd4f89726",
	                        "default-k8s-different-port-20220701230032-10066"
	                    ],
	                    "NetworkID": "08b054338871e09e9987c4187ebe43c21ee49646be113b14ac2205c8647ea77d",
	                    "EndpointID": "9c2cdcd15c5d5bebda898002f555e5c0adc6dc1d266a40af76b7e4a391cd8cc6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
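The inspect output above can also be consumed programmatically. A short sketch, assuming only the docker CLI on PATH; the struct keeps just the fields relevant to this post-mortem (status and the start/finish timestamps) and is not minikube's own code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// containerInfo mirrors a small subset of `docker inspect` output.
type containerInfo struct {
	Name  string
	State struct {
		Status     string
		Running    bool
		StartedAt  string
		FinishedAt string
	}
}

func main() {
	out, err := exec.Command("docker", "inspect",
		"default-k8s-different-port-20220701230032-10066").Output()
	if err != nil {
		panic(err)
	}
	var infos []containerInfo // docker inspect always emits a JSON array
	if err := json.Unmarshal(out, &infos); err != nil {
		panic(err)
	}
	for _, c := range infos {
		// For a restarted container, FinishedAt belongs to the previous run,
		// so it can precede StartedAt, exactly as in the report above.
		fmt.Printf("%s: status=%s started=%s finished=%s\n",
			c.Name, c.State.Status, c.State.StartedAt, c.State.FinishedAt)
	}
}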
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220701230032-10066 -n default-k8s-different-port-20220701230032-10066
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220701230032-10066 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:05 UTC |
	|         | embed-certs-20220701225830-10066                           |          |         |         |                     |                     |
	| start   | -p newest-cni-20220701230537-10066 --memory=2200           | minikube | jenkins | v1.26.0 | 01 Jul 22 23:05 UTC | 01 Jul 22 23:06 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:06 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p newest-cni-20220701230537-10066 --memory=2200           | minikube | jenkins | v1.26.0 | 01 Jul 22 23:06 UTC | 01 Jul 22 23:07 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:07 UTC | 01 Jul 22 23:07 UTC |
	|         | newest-cni-20220701230537-10066                            |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC | 01 Jul 22 23:10 UTC |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC | 01 Jul 22 23:10 UTC |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC | 01 Jul 22 23:10 UTC |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:10 UTC |                     |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |          |         |         |                     |                     |
	|         | --wait=true --preload=false                                |          |         |         |                     |                     |
	|         | --driver=docker                                            |          |         |         |                     |                     |
	|         | --container-runtime=containerd                             |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | default-k8s-different-port-20220701230032-10066            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | default-k8s-different-port-20220701230032-10066            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | default-k8s-different-port-20220701230032-10066            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC |                     |
	|         | default-k8s-different-port-20220701230032-10066            |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:13 UTC | 01 Jul 22 23:13 UTC |
	|         | old-k8s-version-20220701225700-10066                       |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 01 Jul 22 23:28 UTC | 01 Jul 22 23:28 UTC |
	|         | no-preload-20220701225718-10066                            |          |         |         |                     |                     |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/01 23:13:36
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
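The entries below follow the glog format documented in this header. A small sketch of a parser for that format, useful for filtering the log by level or PID; the regular expression and field names are illustrative:

package main

import (
	"fmt"
	"regexp"
)

// Matches lines like "I0701 23:13:36.508585  275844 out.go:296] Setting OutFile to fd 1 ..."
var glogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := "I0701 23:13:36.508585  275844 out.go:296] Setting OutFile to fd 1 ..."
	m := glogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a glog line")
		return
	}
	fmt.Printf("level=%s date=%s time=%s pid=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}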
	I0701 23:13:36.508585  275844 out.go:296] Setting OutFile to fd 1 ...
	I0701 23:13:36.508812  275844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:13:36.508825  275844 out.go:309] Setting ErrFile to fd 2...
	I0701 23:13:36.508833  275844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 23:13:36.509394  275844 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 23:13:36.509707  275844 out.go:303] Setting JSON to false
	I0701 23:13:36.511123  275844 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3370,"bootTime":1656713847,"procs":332,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 23:13:36.511210  275844 start.go:125] virtualization: kvm guest
	I0701 23:13:36.513852  275844 out.go:177] * [default-k8s-different-port-20220701230032-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0701 23:13:36.516346  275844 out.go:177]   - MINIKUBE_LOCATION=14483
	I0701 23:13:36.516221  275844 notify.go:193] Checking for updates...
	I0701 23:13:36.517990  275844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 23:13:36.519337  275844 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:13:36.520961  275844 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 23:13:36.522517  275844 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 23:13:36.524336  275844 config.go:178] Loaded profile config "default-k8s-different-port-20220701230032-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:13:36.524783  275844 driver.go:360] Setting default libvirt URI to qemu:///system
	I0701 23:13:36.571678  275844 docker.go:137] docker version: linux-20.10.17
	I0701 23:13:36.571797  275844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:13:36.688003  275844 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-07-01 23:13:36.603240517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:13:36.688097  275844 docker.go:254] overlay module found
	I0701 23:13:36.689718  275844 out.go:177] * Using the docker driver based on existing profile
	I0701 23:13:36.691073  275844 start.go:284] selected driver: docker
	I0701 23:13:36.691091  275844 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:13:36.691176  275844 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 23:13:36.711421  275844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 23:13:36.815393  275844 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-07-01 23:13:36.741940503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 23:13:36.815669  275844 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 23:13:36.815700  275844 cni.go:95] Creating CNI manager for ""
	I0701 23:13:36.815708  275844 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:13:36.815734  275844 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:13:36.817973  275844 out.go:177] * Starting control plane node default-k8s-different-port-20220701230032-10066 in cluster default-k8s-different-port-20220701230032-10066
	I0701 23:13:36.819338  275844 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0701 23:13:36.820691  275844 out.go:177] * Pulling base image ...
	I0701 23:13:36.821863  275844 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:13:36.821911  275844 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4
	I0701 23:13:36.821925  275844 cache.go:57] Caching tarball of preloaded images
	I0701 23:13:36.821988  275844 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0701 23:13:36.822107  275844 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 23:13:36.822124  275844 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on containerd
	I0701 23:13:36.822229  275844 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/config.json ...
	I0701 23:13:36.857028  275844 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0701 23:13:36.857061  275844 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0701 23:13:36.857085  275844 cache.go:208] Successfully downloaded all kic artifacts
	I0701 23:13:36.857128  275844 start.go:352] acquiring machines lock for default-k8s-different-port-20220701230032-10066: {Name:mk7518221e8259d073969ba977a5dbef99fe5935 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 23:13:36.857241  275844 start.go:356] acquired machines lock for "default-k8s-different-port-20220701230032-10066" in 79.413µs
	I0701 23:13:36.857265  275844 start.go:94] Skipping create...Using existing machine configuration
	I0701 23:13:36.857273  275844 fix.go:55] fixHost starting: 
	I0701 23:13:36.857565  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:13:36.889959  275844 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220701230032-10066: state=Stopped err=<nil>
	W0701 23:13:36.890003  275844 fix.go:129] unexpected machine state, will restart: <nil>
	I0701 23:13:36.892196  275844 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220701230032-10066" ...
	I0701 23:13:34.335098  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:36.335670  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:36.893583  275844 cli_runner.go:164] Run: docker start default-k8s-different-port-20220701230032-10066
	I0701 23:13:37.260876  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:13:37.298699  275844 kic.go:416] container "default-k8s-different-port-20220701230032-10066" state is running.
	I0701 23:13:37.299071  275844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:13:37.333911  275844 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/config.json ...
	I0701 23:13:37.334149  275844 machine.go:88] provisioning docker machine ...
	I0701 23:13:37.334173  275844 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220701230032-10066"
	I0701 23:13:37.334223  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:37.368604  275844 main.go:134] libmachine: Using SSH client type: native
	I0701 23:13:37.368836  275844 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0701 23:13:37.368867  275844 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220701230032-10066 && echo "default-k8s-different-port-20220701230032-10066" | sudo tee /etc/hostname
	I0701 23:13:37.369499  275844 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35278->127.0.0.1:49442: read: connection reset by peer
	I0701 23:13:40.494516  275844 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220701230032-10066
	
	I0701 23:13:40.494611  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:40.527972  275844 main.go:134] libmachine: Using SSH client type: native
	I0701 23:13:40.528160  275844 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0701 23:13:40.528184  275844 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220701230032-10066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220701230032-10066/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220701230032-10066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 23:13:40.641942  275844 main.go:134] libmachine: SSH cmd err, output: <nil>: 
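The dial at 23:13:37 above failed with a connection reset because sshd inside the just-restarted container was not yet accepting connections; the retry a few seconds later succeeded. A minimal sketch of that dial-with-retry pattern, assuming golang.org/x/crypto/ssh and the forwarded port 49442 from this run; the key path and retry policy are illustrative, not libmachine's actual logic:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/id_rsa") // hypothetical key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
		Timeout:         10 * time.Second,
	}

	var client *ssh.Client
	for attempt := 1; attempt <= 5; attempt++ {
		client, err = ssh.Dial("tcp", "127.0.0.1:49442", cfg)
		if err == nil {
			break
		}
		// Early attempts typically fail like the log above:
		// "ssh: handshake failed: ... read: connection reset by peer"
		fmt.Printf("attempt %d failed, retrying: %v\n", attempt, err)
		time.Sleep(3 * time.Second)
	}
	if client == nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("hostname: %s", out)
}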
	I0701 23:13:40.641973  275844 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
	I0701 23:13:40.642000  275844 ubuntu.go:177] setting up certificates
	I0701 23:13:40.642011  275844 provision.go:83] configureAuth start
	I0701 23:13:40.642064  275844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:13:40.675855  275844 provision.go:138] copyHostCerts
	I0701 23:13:40.675913  275844 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
	I0701 23:13:40.675927  275844 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
	I0701 23:13:40.675991  275844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
	I0701 23:13:40.676060  275844 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
	I0701 23:13:40.676071  275844 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
	I0701 23:13:40.676098  275844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
	I0701 23:13:40.676148  275844 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
	I0701 23:13:40.676158  275844 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
	I0701 23:13:40.676192  275844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1679 bytes)
	I0701 23:13:40.676235  275844 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220701230032-10066 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220701230032-10066]
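configureAuth regenerates a server certificate whose SANs cover every address the machine answers on, per the san=[...] list above. A compact sketch with crypto/x509, using that SAN list and the 26280h CertExpiration from the profile config; it self-signs for brevity, whereas minikube signs with its ca.pem/ca-key.pem pair:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-different-port-20220701230032-10066"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as logged: IPs plus hostnames the server certificate must match.
		IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "default-k8s-different-port-20220701230032-10066"},
	}
	// Self-signed for the sketch: the template acts as its own parent.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &priv.PublicKey, priv)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}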
	I0701 23:13:40.954393  275844 provision.go:172] copyRemoteCerts
	I0701 23:13:40.954451  275844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 23:13:40.954482  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:40.989611  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.073447  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 23:13:41.090826  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0701 23:13:41.107547  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 23:13:41.124219  275844 provision.go:86] duration metric: configureAuth took 482.194415ms
	I0701 23:13:41.124245  275844 ubuntu.go:193] setting minikube options for container-runtime
	I0701 23:13:41.124417  275844 config.go:178] Loaded profile config "default-k8s-different-port-20220701230032-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:13:41.124431  275844 machine.go:91] provisioned docker machine in 3.790266635s
	I0701 23:13:41.124441  275844 start.go:306] post-start starting for "default-k8s-different-port-20220701230032-10066" (driver="docker")
	I0701 23:13:41.124452  275844 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 23:13:41.124510  275844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 23:13:41.124554  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.158325  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.245657  275844 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 23:13:41.248516  275844 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 23:13:41.248538  275844 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 23:13:41.248546  275844 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 23:13:41.248551  275844 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0701 23:13:41.248559  275844 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
	I0701 23:13:41.248598  275844 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
	I0701 23:13:41.248664  275844 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem -> 100662.pem in /etc/ssl/certs
	I0701 23:13:41.248742  275844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 23:13:41.255535  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:13:41.272444  275844 start.go:309] post-start completed in 147.990653ms
	I0701 23:13:41.272501  275844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 23:13:41.272534  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.306973  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.391227  275844 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 23:13:41.395145  275844 fix.go:57] fixHost completed within 4.53786816s
	I0701 23:13:41.395167  275844 start.go:81] releasing machines lock for "default-k8s-different-port-20220701230032-10066", held for 4.537914302s
	I0701 23:13:41.395240  275844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.428938  275844 ssh_runner.go:195] Run: systemctl --version
	I0701 23:13:41.428983  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.428986  275844 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0701 23:13:41.429036  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:13:41.463442  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:41.464061  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:13:38.835336  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:41.334767  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:43.334801  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:41.546236  275844 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 23:13:41.557434  275844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 23:13:41.566944  275844 docker.go:179] disabling docker service ...
	I0701 23:13:41.566994  275844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 23:13:41.575898  275844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 23:13:41.584165  275844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 23:13:41.651388  275844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 23:13:41.723308  275844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0701 23:13:41.731887  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 23:13:41.744366  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0701 23:13:41.752324  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0701 23:13:41.760056  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0701 23:13:41.767864  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0701 23:13:41.775399  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0701 23:13:41.782555  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
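
The base64 payload written above is tiny and fully recoverable: "dmVyc2lvbiA9IDIK" decodes to the single line "version = 2", i.e. a stub drop-in that pins containerd's config schema version. A quick check in Go:

    package main

    import (
    	"encoding/base64"
    	"fmt"
    )

    func main() {
    	raw, _ := base64.StdEncoding.DecodeString("dmVyc2lvbiA9IDIK")
    	fmt.Printf("%s", raw) // prints "version = 2" followed by a newline
    }
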
	I0701 23:13:41.794357  275844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 23:13:41.800246  275844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 23:13:41.806090  275844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 23:13:41.881056  275844 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 23:13:41.950865  275844 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0701 23:13:41.950932  275844 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0701 23:13:41.955104  275844 start.go:471] Will wait 60s for crictl version
	I0701 23:13:41.955155  275844 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:13:41.981690  275844 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-07-01T23:13:41Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0701 23:13:45.834614  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:47.835771  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:53.029041  275844 ssh_runner.go:195] Run: sudo crictl version
	I0701 23:13:53.051421  275844 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0701 23:13:53.051470  275844 ssh_runner.go:195] Run: containerd --version
	I0701 23:13:53.078982  275844 ssh_runner.go:195] Run: containerd --version
	I0701 23:13:53.109597  275844 out.go:177] * Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	I0701 23:13:50.335036  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:52.834973  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:53.110955  275844 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220701230032-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
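
The Go template passed to `docker network inspect` above extracts the subnet, gateway, and MTU from the network's IPAM config. The same fields can be read programmatically; a sketch assuming the Docker Engine Go SDK signatures contemporary with this run (moby client v20.x):

    package main

    import (
    	"context"
    	"fmt"

    	"github.com/docker/docker/api/types"
    	"github.com/docker/docker/client"
    )

    func main() {
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		panic(err)
    	}
    	// Network name taken from the log line above.
    	nw, err := cli.NetworkInspect(context.Background(),
    		"default-k8s-different-port-20220701230032-10066", types.NetworkInspectOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range nw.IPAM.Config {
    		fmt.Printf("subnet=%s gateway=%s\n", c.Subnet, c.Gateway)
    	}
    }
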
	I0701 23:13:53.143106  275844 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0701 23:13:53.146306  275844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 23:13:53.155228  275844 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 23:13:53.155287  275844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:13:53.177026  275844 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:13:53.177047  275844 containerd.go:461] Images already preloaded, skipping extraction
	I0701 23:13:53.177094  275844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 23:13:53.198475  275844 containerd.go:547] all images are preloaded for containerd runtime.
	I0701 23:13:53.198501  275844 cache_images.go:84] Images are preloaded, skipping loading
	I0701 23:13:53.198643  275844 ssh_runner.go:195] Run: sudo crictl info
	I0701 23:13:53.221518  275844 cni.go:95] Creating CNI manager for ""
	I0701 23:13:53.221540  275844 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:13:53.221552  275844 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0701 23:13:53.221564  275844 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220701230032-10066 NodeName:default-k8s-different-port-20220701230032-10066 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0701 23:13:53.221715  275844 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220701230032-10066"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0701 23:13:53.221814  275844 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220701230032-10066 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0701 23:13:53.221875  275844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0701 23:13:53.228898  275844 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 23:13:53.228952  275844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 23:13:53.235366  275844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0701 23:13:53.247371  275844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 23:13:53.259313  275844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0701 23:13:53.271530  275844 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0701 23:13:53.274142  275844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
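
This /etc/hosts update, like the host.minikube.internal one earlier, is a replace-then-append: `grep -v` drops any stale line for the name, the fresh mapping is echoed on, and the temp file is copied back over /etc/hosts. The same logic rendered in Go (illustrative only; minikube shells out exactly as logged):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHostsEntry drops any existing line ending in "\t<name>" and appends
    // a fresh "<ip>\t<name>" mapping, mirroring the grep -v / echo pipeline.
    func upsertHostsEntry(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n192.168.76.1\thost.minikube.internal\n"
    	fmt.Print(upsertHostsEntry(hosts, "192.168.76.2", "control-plane.minikube.internal"))
    }
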
	I0701 23:13:53.282892  275844 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066 for IP: 192.168.76.2
	I0701 23:13:53.282980  275844 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
	I0701 23:13:53.283015  275844 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
	I0701 23:13:53.283078  275844 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/client.key
	I0701 23:13:53.283124  275844 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key.31bdca25
	I0701 23:13:53.283163  275844 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.key
	I0701 23:13:53.283252  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem (1338 bytes)
	W0701 23:13:53.283280  275844 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066_empty.pem, impossibly tiny 0 bytes
	I0701 23:13:53.283295  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 23:13:53.283320  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
	I0701 23:13:53.283343  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
	I0701 23:13:53.283367  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1679 bytes)
	I0701 23:13:53.283409  275844 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem (1708 bytes)
	I0701 23:13:53.283939  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0701 23:13:53.300388  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 23:13:53.317215  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 23:13:53.333335  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/default-k8s-different-port-20220701230032-10066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0701 23:13:53.349529  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 23:13:53.365494  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0701 23:13:53.381103  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 23:13:53.396977  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 23:13:53.412881  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 23:13:53.429709  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem --> /usr/share/ca-certificates/10066.pem (1338 bytes)
	I0701 23:13:53.446017  275844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /usr/share/ca-certificates/100662.pem (1708 bytes)
	I0701 23:13:53.461814  275844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 23:13:53.473437  275844 ssh_runner.go:195] Run: openssl version
	I0701 23:13:53.478032  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 23:13:53.484818  275844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:13:53.487660  275844 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul  1 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:13:53.487710  275844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 23:13:53.492105  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 23:13:53.498584  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10066.pem && ln -fs /usr/share/ca-certificates/10066.pem /etc/ssl/certs/10066.pem"
	I0701 23:13:53.505448  275844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10066.pem
	I0701 23:13:53.508315  275844 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul  1 22:28 /usr/share/ca-certificates/10066.pem
	I0701 23:13:53.508365  275844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10066.pem
	I0701 23:13:53.512833  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10066.pem /etc/ssl/certs/51391683.0"
	I0701 23:13:53.519315  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100662.pem && ln -fs /usr/share/ca-certificates/100662.pem /etc/ssl/certs/100662.pem"
	I0701 23:13:53.526653  275844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100662.pem
	I0701 23:13:53.529618  275844 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul  1 22:28 /usr/share/ca-certificates/100662.pem
	I0701 23:13:53.529700  275844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100662.pem
	I0701 23:13:53.534593  275844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100662.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 23:13:53.541972  275844 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220701230032-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220701230032-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 23:13:53.542071  275844 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0701 23:13:53.542137  275844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:13:53.565066  275844 cri.go:87] found id: "e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e"
	I0701 23:13:53.565094  275844 cri.go:87] found id: "b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7"
	I0701 23:13:53.565103  275844 cri.go:87] found id: "50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52"
	I0701 23:13:53.565110  275844 cri.go:87] found id: "f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c"
	I0701 23:13:53.565115  275844 cri.go:87] found id: "a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2"
	I0701 23:13:53.565121  275844 cri.go:87] found id: "042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c"
	I0701 23:13:53.565127  275844 cri.go:87] found id: ""
	I0701 23:13:53.565155  275844 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0701 23:13:53.577099  275844 cri.go:114] JSON = null
	W0701 23:13:53.577140  275844 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
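
The "JSON = null" line is the literal output of `runc ... list -f json` when no containers are tracked under that root; it unmarshals to an empty list, hence the warning that runc sees 0 containers while `crictl ps` sees 6. A sketch of that check, assuming runc's documented lowercase `id`/`status` JSON fields:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type runcState struct {
    	ID     string `json:"id"`
    	Status string `json:"status"`
    }

    func main() {
    	out, err := exec.Command("sudo", "runc",
    		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	// `runc list -f json` prints "null" when empty, which unmarshals to a
    	// nil slice -- exactly the "JSON = null" case in the log.
    	var states []runcState
    	if err := json.Unmarshal(out, &states); err != nil {
    		panic(err)
    	}
    	var paused []string
    	for _, s := range states {
    		if s.Status == "paused" {
    			paused = append(paused, s.ID)
    		}
    	}
    	fmt.Printf("paused containers: %v\n", paused)
    }
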
	I0701 23:13:53.577183  275844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 23:13:53.583727  275844 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0701 23:13:53.583745  275844 kubeadm.go:626] restartCluster start
	I0701 23:13:53.583773  275844 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 23:13:53.589812  275844 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:53.590282  275844 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220701230032-10066" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:13:53.590469  275844 kubeconfig.go:127] "default-k8s-different-port-20220701230032-10066" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig - will repair!
	I0701 23:13:53.590950  275844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
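
The verify/repair step above amounts to loading the kubeconfig and looking the profile's context up by name. A sketch with client-go's clientcmd loader; the kubeconfig path here is an illustrative stand-in for the jenkins path in the log:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	name := "default-k8s-different-port-20220701230032-10066"
    	if _, ok := cfg.Contexts[name]; !ok {
    		fmt.Printf("context %q missing from kubeconfig - needs repair\n", name)
    	}
    }
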
	I0701 23:13:53.592051  275844 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 23:13:53.598266  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:53.598304  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:53.605628  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:53.806026  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:53.806089  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:53.814576  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.005749  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.005835  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.013967  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.206355  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.206416  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.215350  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.406581  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.406651  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.415525  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.605755  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.605834  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.614602  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:54.805813  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:54.805894  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:54.814430  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.006748  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.006824  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.015390  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.206606  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.206712  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.215161  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.406468  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.406570  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.415209  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.606590  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.606691  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.615437  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.806738  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:55.806828  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:55.815002  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.006349  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.006435  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.014726  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.205912  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.205993  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.214477  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.405750  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.405831  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.414060  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:55.334779  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:57.835309  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:13:56.606652  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.606715  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.615356  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.615374  275844 api_server.go:165] Checking apiserver status ...
	I0701 23:13:56.615402  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 23:13:56.623156  275844 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.623180  275844 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0701 23:13:56.623187  275844 kubeadm.go:1092] stopping kube-system containers ...
	I0701 23:13:56.623201  275844 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0701 23:13:56.623258  275844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 23:13:56.649113  275844 cri.go:87] found id: "e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e"
	I0701 23:13:56.649133  275844 cri.go:87] found id: "b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7"
	I0701 23:13:56.649140  275844 cri.go:87] found id: "50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52"
	I0701 23:13:56.649146  275844 cri.go:87] found id: "f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c"
	I0701 23:13:56.649152  275844 cri.go:87] found id: "a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2"
	I0701 23:13:56.649158  275844 cri.go:87] found id: "042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c"
	I0701 23:13:56.649164  275844 cri.go:87] found id: ""
	I0701 23:13:56.649169  275844 cri.go:232] Stopping containers: [e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7 50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52 f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2 042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c]
	I0701 23:13:56.649212  275844 ssh_runner.go:195] Run: which crictl
	I0701 23:13:56.652179  275844 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop e85724e2f1379e9f72855b4c17e5be37e5dd4dff536c80d299bd5241649a265e b63fba32c68cc99cdb8950d8050e7ca6943c6a72828966d71b55fafb9d91f6e7 50e0bf3dbb8c17ae34fc33304272cac9c55348eb81d6ea8d3d628b572f381d52 f41d2b7f1a0c99c9045814b5dd2fd1eab002640d3aa9e20bb5668fcb7f9b058c a349e45d95bb63f7035c09ee018a2e4c53951c55880e50380e1aea99c14be6f2 042166814f4c8d60a0dbbbff1af21c2918ff8952bc19ad1f8ec5dbfdabb7730c
	I0701 23:13:56.676014  275844 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0701 23:13:56.685537  275844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:13:56.692196  275844 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul  1 23:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul  1 23:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Jul  1 23:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jul  1 23:00 /etc/kubernetes/scheduler.conf
	
	I0701 23:13:56.692247  275844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0701 23:13:56.698641  275844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0701 23:13:56.704856  275844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0701 23:13:56.711153  275844 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.711210  275844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 23:13:56.717322  275844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0701 23:13:56.723423  275844 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 23:13:56.723459  275844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0701 23:13:56.729312  275844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:13:56.736598  275844 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0701 23:13:56.736617  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:56.781688  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:57.445598  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:57.633371  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:57.679946  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:13:57.749368  275844 api_server.go:51] waiting for apiserver process to appear ...
	I0701 23:13:57.749432  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:13:58.318180  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:13:58.818690  275844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 23:13:58.830974  275844 api_server.go:71] duration metric: took 1.081606586s to wait for apiserver process to appear ...
	I0701 23:13:58.831001  275844 api_server.go:87] waiting for apiserver healthz status ...
	I0701 23:13:58.831034  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:13:58.831436  275844 api_server.go:256] stopped: https://192.168.76.2:8444/healthz: Get "https://192.168.76.2:8444/healthz": dial tcp 192.168.76.2:8444: connect: connection refused
	I0701 23:13:59.331708  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:14:01.921615  275844 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:14:01.921654  275844 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:14:02.332201  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:14:02.336755  275844 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:14:02.336792  275844 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:14:02.831892  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:14:02.836248  275844 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 23:14:02.836275  275844 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 23:14:03.331795  275844 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0701 23:14:03.337047  275844 api_server.go:266] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0701 23:14:03.343503  275844 api_server.go:140] control plane version: v1.24.2
	I0701 23:14:03.343525  275844 api_server.go:130] duration metric: took 4.512518171s to wait for apiserver health ...
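Editor's note: the 500 bodies above are the apiserver's verbose healthz report; every [+] check has passed and only the two [-] post-start hooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are still failing, which is why the poll keeps retrying until the 200 at 23:14:03. A hedged manual equivalent of that probe, reusing the endpoint and port from the log (the ?verbose query is what produces the per-check listing; -k skips the self-signed certificate check):

	# Sketch of a manual healthz probe; endpoint and port are taken from the log above.
	curl -k "https://192.168.76.2:8444/healthz?verbose"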
	I0701 23:14:03.343535  275844 cni.go:95] Creating CNI manager for ""
	I0701 23:14:03.343540  275844 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:14:03.345598  275844 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:13:59.835489  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:02.335364  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:03.347224  275844 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:14:03.350686  275844 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:14:03.350707  275844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:14:03.363866  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
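With the docker driver paired with the containerd runtime, minikube selects kindnet and pushes the CNI manifest with the cluster's own kubectl binary. The commands below are a sketch of the same steps run by hand, with every path copied from the log above:

	# Confirm the portmap CNI plugin is present, then apply the generated manifest.
	stat /opt/cni/bin/portmap
	sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply \
	    --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml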
	I0701 23:14:04.295415  275844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 23:14:04.301798  275844 system_pods.go:59] 9 kube-system pods found
	I0701 23:14:04.301825  275844 system_pods.go:61] "coredns-6d4b75cb6d-zmnqs" [f0e0d22f-cd83-4531-8778-32070816b159] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:14:04.301837  275844 system_pods.go:61] "etcd-default-k8s-different-port-20220701230032-10066" [c4b3993a-3a6c-4827-8250-b951a48b9432] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0701 23:14:04.301844  275844 system_pods.go:61] "kindnet-49h72" [bee4a070-eb2f-45af-a824-f8ebb08e21cb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0701 23:14:04.301851  275844 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220701230032-10066" [2ce9acd5-e8e7-425b-bb9b-5dd480397910] Running
	I0701 23:14:04.301860  275844 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220701230032-10066" [2fec1fad-34c5-4b47-8713-8e789b816ca6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0701 23:14:04.301868  275844 system_pods.go:61] "kube-proxy-qg5j2" [c67a38f9-ae75-40ea-8992-85a437368c50] Running
	I0701 23:14:04.301873  275844 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220701230032-10066" [49056cd0-4107-4377-ba51-b97af35cbe72] Running
	I0701 23:14:04.301882  275844 system_pods.go:61] "metrics-server-5c6f97fb75-mkq9q" [f5b66095-14d2-4de4-9f1d-2cd5371ec0fd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:14:04.301890  275844 system_pods.go:61] "storage-provisioner" [6e0344bb-c7de-41f4-95d2-f30576ae036c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0701 23:14:04.301898  275844 system_pods.go:74] duration metric: took 6.458628ms to wait for pod list to return data ...
	I0701 23:14:04.301907  275844 node_conditions.go:102] verifying NodePressure condition ...
	I0701 23:14:04.304305  275844 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0701 23:14:04.304330  275844 node_conditions.go:123] node cpu capacity is 8
	I0701 23:14:04.304343  275844 node_conditions.go:105] duration metric: took 2.432316ms to run NodePressure ...
	I0701 23:14:04.304363  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 23:14:04.434166  275844 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0701 23:14:04.438097  275844 kubeadm.go:777] kubelet initialised
	I0701 23:14:04.438123  275844 kubeadm.go:778] duration metric: took 3.933976ms waiting for restarted kubelet to initialise ...
	I0701 23:14:04.438131  275844 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:14:04.443068  275844 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace to be "Ready" ...
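Every pod_ready retry that follows reports the same root cause: the lone node still carries the node.kubernetes.io/not-ready taint, so coredns, which does not tolerate it, stays Pending and Unschedulable. A hypothetical check, not part of the test run, that would confirm the taint from the host (assuming minikube's usual context-per-profile naming):

	# List each node with its taints to see why the pod cannot be scheduled.
	kubectl --context default-k8s-different-port-20220701230032-10066 get nodes \
	    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'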
	I0701 23:14:06.448162  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:04.335402  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:06.335651  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:08.448866  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:10.948772  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:08.834525  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:11.335287  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:12.949108  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:15.448393  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:13.834432  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:15.835251  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:18.334462  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:17.948235  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:19.948671  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:20.334833  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:22.334898  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:21.948914  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:23.949013  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:26.448470  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:24.335241  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:26.834599  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:28.948377  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:30.948441  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:29.334764  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:31.834659  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:32.948974  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:35.448453  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:33.835115  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:36.334527  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:37.448971  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:39.449007  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:38.834645  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:41.335647  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:41.948832  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:43.948861  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:46.448244  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:43.834536  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:45.835152  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:48.334898  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:48.448469  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:50.448941  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:50.335336  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:52.834828  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:52.948268  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:54.948294  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:55.334778  269883 pod_ready.go:102] pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 22:58:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:56.331712  269883 pod_ready.go:81] duration metric: took 4m0.0026135s waiting for pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace to be "Ready" ...
	E0701 23:14:56.331755  269883 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-6d4b75cb6d-mbfz4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0701 23:14:56.331779  269883 pod_ready.go:38] duration metric: took 4m0.007826908s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:14:56.331809  269883 kubeadm.go:630] restartCluster took 4m10.917993696s
	W0701 23:14:56.331941  269883 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0701 23:14:56.331974  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0701 23:14:57.984431  269883 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.65243003s)
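After the 4m0s extra wait expires, minikube abandons the restart path and resets the cluster before re-initialising it. The reset is plain kubeadm, exactly as logged; as a standalone sketch:

	# Tear down the existing control plane via the containerd CRI socket (verbatim from the log).
	sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" \
	    kubeadm reset --cri-socket /run/containerd/containerd.sock --force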
	I0701 23:14:57.984496  269883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:14:57.994269  269883 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:14:58.001094  269883 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0701 23:14:58.001159  269883 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:14:58.007683  269883 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 23:14:58.007734  269883 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
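The stale-config check exits with status 2 only because the reset just deleted /etc/kubernetes/*.conf, so minikube proceeds straight to a fresh kubeadm init. The long --ignore-preflight-errors list (Port-10250, Swap, Mem, SystemVerification, plus the DirAvailable/FileAvailable checks) exists because the "node" is a docker container rather than a VM. An abridged, hedged sketch of that init; the full flag list is in the line above:

	sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	    --ignore-preflight-errors=Swap,Mem,SystemVerification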
	I0701 23:14:56.949272  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:14:58.949543  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:01.449627  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:03.950758  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:06.448470  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:06.936698  269883 out.go:204]   - Generating certificates and keys ...
	I0701 23:15:06.939424  269883 out.go:204]   - Booting up control plane ...
	I0701 23:15:06.941904  269883 out.go:204]   - Configuring RBAC rules ...
	I0701 23:15:06.944403  269883 cni.go:95] Creating CNI manager for ""
	I0701 23:15:06.944429  269883 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:15:06.945976  269883 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:15:06.947445  269883 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:15:06.951630  269883 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:15:06.951650  269883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:15:06.966756  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0701 23:15:07.699280  269883 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 23:15:07.699401  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:07.699419  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04 minikube.k8s.io/name=no-preload-20220701225718-10066 minikube.k8s.io/updated_at=2022_07_01T23_15_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:07.706386  269883 ops.go:34] apiserver oom_adj: -16
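Once the control plane answers, minikube verifies that the apiserver process is shielded from the OOM killer (oom_adj -16) and grants cluster-admin to the kube-system default service account so addons can operate. Both probes are copied verbatim from the log:

	# Read the apiserver's OOM adjustment, then create the minikube-rbac binding.
	cat /proc/$(pgrep kube-apiserver)/oom_adj
	sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac \
	    --clusterrole=cluster-admin --serviceaccount=kube-system:default \
	    --kubeconfig=/var/lib/minikube/kubeconfig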
	I0701 23:15:07.765556  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:08.338006  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:08.448681  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:10.448820  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:08.838005  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:09.337996  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:09.837437  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:10.337629  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:10.837363  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:11.337763  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:11.838075  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:12.338080  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:12.837649  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:13.337387  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:12.449226  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:14.948189  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:13.838035  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:14.337961  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:14.838063  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:15.338241  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:15.837500  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:16.337613  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:16.838363  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:17.337701  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:17.838061  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:18.337742  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:18.838306  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:19.337570  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:19.837680  269883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:15:19.892044  269883 kubeadm.go:1045] duration metric: took 12.192690701s to wait for elevateKubeSystemPrivileges.
	I0701 23:15:19.892072  269883 kubeadm.go:397] StartCluster complete in 4m34.521249474s
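The burst of identical "get sa default" calls above is elevateKubeSystemPrivileges polling, roughly twice a second, until the default service account exists (no namespace flag, so the default namespace); here the account appeared after 12.19s. A hedged standalone equivalent, with the poll interval assumed:

	until sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5   # interval is an assumption; the logged calls arrive ~0.5s apart
	done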
	I0701 23:15:19.892091  269883 settings.go:142] acquiring lock: {Name:mk319951f11766fbe002e53432d5b04e4322851e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:15:19.892193  269883 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:15:19.893038  269883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:15:20.407163  269883 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220701225718-10066" rescaled to 1
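kapi.go then scales the coredns deployment down to one replica to fit the single-node topology. A hypothetical manual equivalent (the test does this through minikube's own client, not kubectl):

	kubectl -n kube-system scale deployment coredns --replicas=1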
	I0701 23:15:20.407233  269883 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 23:15:20.409054  269883 out.go:177] * Verifying Kubernetes components...
	I0701 23:15:20.407277  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 23:15:20.407307  269883 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0701 23:15:20.407455  269883 config.go:178] Loaded profile config "no-preload-20220701225718-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:15:20.410261  269883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:15:20.410307  269883 addons.go:65] Setting dashboard=true in profile "no-preload-20220701225718-10066"
	I0701 23:15:20.410316  269883 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220701225718-10066"
	I0701 23:15:20.410322  269883 addons.go:65] Setting metrics-server=true in profile "no-preload-20220701225718-10066"
	I0701 23:15:20.410331  269883 addons.go:153] Setting addon dashboard=true in "no-preload-20220701225718-10066"
	I0701 23:15:20.410333  269883 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220701225718-10066"
	W0701 23:15:20.410339  269883 addons.go:162] addon dashboard should already be in state true
	I0701 23:15:20.410339  269883 addons.go:153] Setting addon metrics-server=true in "no-preload-20220701225718-10066"
	W0701 23:15:20.410348  269883 addons.go:162] addon metrics-server should already be in state true
	I0701 23:15:20.410378  269883 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 23:15:20.410384  269883 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 23:15:20.410308  269883 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220701225718-10066"
	I0701 23:15:20.410415  269883 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220701225718-10066"
	W0701 23:15:20.410428  269883 addons.go:162] addon storage-provisioner should already be in state true
	I0701 23:15:20.410464  269883 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 23:15:20.410690  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.410883  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.410898  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.410944  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.462647  269883 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0701 23:15:20.462859  269883 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220701225718-10066"
	I0701 23:15:20.464095  269883 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0701 23:15:20.464150  269883 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0701 23:15:20.464162  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0701 23:15:20.464109  269883 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0701 23:15:20.464170  269883 addons.go:162] addon default-storageclass should already be in state true
	I0701 23:15:20.465490  269883 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0701 23:15:20.466852  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0701 23:15:20.466866  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0701 23:15:20.465507  269883 host.go:66] Checking if "no-preload-20220701225718-10066" exists ...
	I0701 23:15:20.466910  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:15:16.948842  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:18.949526  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:21.448543  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
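The pod_ready.go lines interleaved here come from a second test binary (PID 275844) running in parallel against a different profile. Its CoreDNS pod is stuck Pending because the lone node still carries the node.kubernetes.io/not-ready taint, which the scheduler surfaces as "0/1 nodes are available". An illustrative version of the check the poller keeps failing (assumed shape, not minikube's exact code):

    import corev1 "k8s.io/api/core/v1"

    // podReady returns true only once the PodReady condition is True;
    // a Pending pod whose PodScheduled condition is False never gets there.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }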
	I0701 23:15:20.468347  269883 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:15:20.468364  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
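Each "scp memory" pair above stages an embedded YAML asset onto the node without touching the local disk. A stand-in for that transfer, assuming an established *ssh.Client and piping through sudo tee instead of minikube's real scp implementation:

    import (
        "bytes"

        "golang.org/x/crypto/ssh"
    )

    // pushAsset streams an in-memory manifest to dst on the node.
    func pushAsset(client *ssh.Client, asset []byte, dst string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(asset)
        // sudo tee writes the streamed bytes to the target path.
        return sess.Run("sudo tee " + dst + " >/dev/null")
    }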
	I0701 23:15:20.468412  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:15:20.465559  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:15:20.467550  269883 cli_runner.go:164] Run: docker container inspect no-preload-20220701225718-10066 --format={{.State.Status}}
	I0701 23:15:20.497855  269883 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220701225718-10066" to be "Ready" ...
	I0701 23:15:20.497910  269883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
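The bash pipeline above fetches the live coredns ConfigMap, uses sed to splice a hosts block in front of the forward directive, and pushes the result back with kubectl replace. Reconstructed from the sed expression, the Corefile gains:

    hosts {
       192.168.94.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf

fallthrough hands every other name on to the existing forward resolver, so only host.minikube.internal (the 192.168.94.1 gateway) is answered locally; the start.go line shortly below ("host record injected into CoreDNS") confirms the rewrite took effect.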
	I0701 23:15:20.515144  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:15:20.520029  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:15:20.522289  269883 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 23:15:20.522310  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 23:15:20.522357  269883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220701225718-10066
	I0701 23:15:20.524783  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
	I0701 23:15:20.568239  269883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/no-preload-20220701225718-10066/id_rsa Username:docker}
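The four sshutil lines dial the node over the host port (49437) that the 22/tcp template lookups above extracted, authenticating as user docker with the profile's id_rsa key. A minimal sketch with golang.org/x/crypto/ssh (host-key checking relaxed here purely for illustration; minikube's own policy may differ):

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // dialNode opens an SSH connection like the sshutil.go lines above,
    // e.g. dialNode("/path/to/id_rsa", "127.0.0.1:49437").
    func dialNode(keyPath, addr string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
        }
        return ssh.Dial("tcp", addr, cfg)
    }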
	I0701 23:15:20.635327  269883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:15:20.635528  269883 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0701 23:15:20.635546  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0701 23:15:20.635773  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0701 23:15:20.635792  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0701 23:15:20.720153  269883 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0701 23:15:20.720184  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0701 23:15:20.720330  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0701 23:15:20.720356  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0701 23:15:20.735914  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0701 23:15:20.735942  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0701 23:15:20.738036  269883 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:15:20.738058  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0701 23:15:20.751468  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0701 23:15:20.751494  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0701 23:15:20.751989  269883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:15:20.830998  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0701 23:15:20.831029  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0701 23:15:20.835184  269883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 23:15:20.919071  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0701 23:15:20.919097  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0701 23:15:20.931803  269883 start.go:809] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
	I0701 23:15:20.938634  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0701 23:15:20.938663  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0701 23:15:21.027932  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0701 23:15:21.027961  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0701 23:15:21.120018  269883 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:15:21.120044  269883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0701 23:15:21.139289  269883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:15:21.542831  269883 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220701225718-10066"
	I0701 23:15:22.318204  269883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.178852341s)
	I0701 23:15:22.320260  269883 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0701 23:15:22.321764  269883 addons.go:414] enableAddons completed in 1.914474598s
	I0701 23:15:22.506049  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
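From here the two pollers dominate the log: 269883's node_ready.go re-reads the node object every ~2.5s against the 6m budget set by the "Will wait 6m0s" line above, while 275844 keeps re-checking its Pending CoreDNS pod. An illustrative loop for the node side using apimachinery's wait helpers (assumed shape, not the test's literal code):

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls until the node's Ready condition is True or
    // the 6-minute budget is exhausted.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        return wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
            n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat transient API errors as "not yet"
            }
            for _, c := range n.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }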
	I0701 23:15:23.449129  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:25.948784  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:25.003072  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:27.003942  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:28.448748  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:30.948490  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:29.503567  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:31.503801  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:33.448177  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:35.948336  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:33.504159  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:35.504602  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:38.003422  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:37.948379  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:39.948560  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:40.504288  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:42.504480  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:41.949060  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:43.949319  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:46.449018  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:44.504514  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:47.002872  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:48.948340  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:51.448205  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:49.003639  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:51.503660  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:53.448249  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:55.448938  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:15:53.503915  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:56.003212  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:58.003807  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:15:57.948938  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:00.448920  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:00.504360  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:03.003336  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:02.449149  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:04.449385  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:05.503324  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:07.503773  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:06.948721  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:09.448775  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:10.003039  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:12.003124  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:11.948462  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:14.448466  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:16.449003  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:14.504207  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:17.003682  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:18.948883  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:21.448510  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:19.503321  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:21.503670  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:23.949051  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:26.448494  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:23.504169  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:26.003440  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:28.448711  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:30.950336  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:28.503980  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:31.003131  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:33.003828  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:33.448272  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:35.448817  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:35.503530  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:37.503721  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:37.449097  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:39.948158  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:39.504219  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:42.002779  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:41.948654  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:43.948719  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:46.448800  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:44.003891  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:46.503378  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:48.948666  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:50.948781  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:48.503897  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:51.003349  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:53.448686  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:55.948675  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:53.504221  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:56.003927  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:16:58.448263  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:00.948090  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:16:58.503637  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:00.503665  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:02.504224  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:02.948518  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:04.948735  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:05.003494  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:07.503949  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:06.948781  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:09.448480  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:11.448536  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:09.504075  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:12.003349  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:13.448566  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:15.948312  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:14.004090  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:16.503717  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:17.948940  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:20.449080  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:18.504348  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:21.002849  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:23.003827  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:22.948356  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:24.949063  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:25.503280  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:27.503458  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:26.949277  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:29.448968  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:29.503895  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:32.003296  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:31.948774  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:33.948802  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:36.448693  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:34.003684  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:36.504246  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:38.948200  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:41.449095  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:39.003597  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:41.504297  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:43.948596  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:46.448338  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:44.003653  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:46.003704  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:48.448406  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:50.449049  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:48.503830  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:51.002929  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:52.949418  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:55.448267  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:53.503901  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:56.003435  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:17:57.948337  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:59.949522  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:17:58.503409  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:00.504015  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:02.504075  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:02.449005  275844 pod_ready.go:102] pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-07-01 23:01:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0701 23:18:04.445635  275844 pod_ready.go:81] duration metric: took 4m0.002536043s waiting for pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace to be "Ready" ...
	E0701 23:18:04.445658  275844 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-6d4b75cb6d-zmnqs" in "kube-system" namespace to be "Ready" (will not retry!)
	I0701 23:18:04.445676  275844 pod_ready.go:38] duration metric: took 4m0.00753476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 23:18:04.445715  275844 kubeadm.go:630] restartCluster took 4m10.861963713s
	W0701 23:18:04.445855  275844 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0701 23:18:04.445882  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0701 23:18:06.095490  275844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.649588457s)
	I0701 23:18:06.095547  275844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:18:06.104815  275844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 23:18:06.112334  275844 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0701 23:18:06.112376  275844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 23:18:06.119483  275844 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 23:18:06.119534  275844 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0701 23:18:06.370658  275844 out.go:204]   - Generating certificates and keys ...
	I0701 23:18:05.003477  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:07.003973  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:07.277086  275844 out.go:204]   - Booting up control plane ...
	I0701 23:18:09.503332  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:11.504503  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:14.316275  275844 out.go:204]   - Configuring RBAC rules ...
	I0701 23:18:14.730162  275844 cni.go:95] Creating CNI manager for ""
	I0701 23:18:14.730189  275844 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 23:18:14.731634  275844 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 23:18:14.732857  275844 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 23:18:14.739597  275844 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0701 23:18:14.739622  275844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0701 23:18:14.825236  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0701 23:18:15.561507  275844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 23:18:15.561626  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:15.561637  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04 minikube.k8s.io/name=default-k8s-different-port-20220701230032-10066 minikube.k8s.io/updated_at=2022_07_01T23_18_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:15.568394  275844 ops.go:34] apiserver oom_adj: -16
	I0701 23:18:15.634685  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:16.190642  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:14.002820  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:16.003780  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:16.690023  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:17.190952  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:17.690163  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:18.191022  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:18.690054  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:19.190723  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:19.690097  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:20.190968  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:20.691032  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:21.190434  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:18.503619  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:20.504289  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:23.003341  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:21.690038  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:22.190938  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:22.690621  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:23.190651  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:23.690833  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:24.190934  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:24.690962  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:25.190256  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:25.690333  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:26.190101  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:26.690887  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:27.190074  275844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 23:18:27.248216  275844 kubeadm.go:1045] duration metric: took 11.686670316s to wait for elevateKubeSystemPrivileges.
	I0701 23:18:27.248246  275844 kubeadm.go:397] StartCluster complete in 4m33.70628023s
	I0701 23:18:27.248264  275844 settings.go:142] acquiring lock: {Name:mk319951f11766fbe002e53432d5b04e4322851e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:18:27.248355  275844 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 23:18:27.249185  275844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 23:18:27.763199  275844 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220701230032-10066" rescaled to 1
	I0701 23:18:27.763267  275844 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0701 23:18:27.766618  275844 out.go:177] * Verifying Kubernetes components...
	I0701 23:18:27.763306  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 23:18:27.763330  275844 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0701 23:18:27.766747  275844 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.766765  275844 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220701230032-10066"
	W0701 23:18:27.766778  275844 addons.go:162] addon storage-provisioner should already be in state true
	I0701 23:18:27.766806  275844 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.766825  275844 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.766828  275844 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:18:27.763473  275844 config.go:178] Loaded profile config "default-k8s-different-port-20220701230032-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 23:18:27.766824  275844 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.768481  275844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 23:18:27.768504  275844 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220701230032-10066"
	W0701 23:18:27.766835  275844 addons.go:162] addon dashboard should already be in state true
	I0701 23:18:27.768632  275844 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:18:27.766843  275844 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.768713  275844 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220701230032-10066"
	W0701 23:18:27.768733  275844 addons.go:162] addon metrics-server should already be in state true
	I0701 23:18:27.768768  275844 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:18:27.767332  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.768887  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.769184  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.769187  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.831262  275844 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0701 23:18:27.832550  275844 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 23:18:27.833969  275844 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:18:27.833992  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 23:18:27.834040  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:18:27.835526  275844 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0701 23:18:27.833023  275844 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:27.837673  275844 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	W0701 23:18:27.837677  275844 addons.go:162] addon default-storageclass should already be in state true
	I0701 23:18:25.003796  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:27.504253  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:27.837692  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0701 23:18:27.839084  275844 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0701 23:18:27.839099  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0701 23:18:27.839108  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0701 23:18:27.839153  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:18:27.837723  275844 host.go:66] Checking if "default-k8s-different-port-20220701230032-10066" exists ...
	I0701 23:18:27.839164  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:18:27.839691  275844 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220701230032-10066 --format={{.State.Status}}
	I0701 23:18:27.856622  275844 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220701230032-10066" to be "Ready" ...
	I0701 23:18:27.856645  275844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0701 23:18:27.890091  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:18:27.891200  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:18:27.895622  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:18:27.896930  275844 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 23:18:27.896946  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 23:18:27.896980  275844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220701230032-10066
	I0701 23:18:27.937496  275844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/default-k8s-different-port-20220701230032-10066/id_rsa Username:docker}
	I0701 23:18:28.136017  275844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 23:18:28.136703  275844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 23:18:28.139953  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0701 23:18:28.139977  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0701 23:18:28.144217  275844 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0701 23:18:28.144239  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0701 23:18:28.234055  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0701 23:18:28.234083  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0701 23:18:28.318902  275844 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0701 23:18:28.318936  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0701 23:18:28.336787  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0701 23:18:28.336818  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0701 23:18:28.423063  275844 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:18:28.423089  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0701 23:18:28.427844  275844 start.go:809] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0701 23:18:28.432989  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0701 23:18:28.433019  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0701 23:18:28.442227  275844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 23:18:28.523695  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0701 23:18:28.523727  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0701 23:18:28.618333  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0701 23:18:28.618365  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0701 23:18:28.636855  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0701 23:18:28.636885  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0701 23:18:28.652952  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0701 23:18:28.652974  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0701 23:18:28.739775  275844 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:18:28.739814  275844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0701 23:18:28.832453  275844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 23:18:29.251359  275844 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220701230032-10066"
	I0701 23:18:29.544427  275844 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0701 23:18:29.545959  275844 addons.go:414] enableAddons completed in 1.78263451s
	I0701 23:18:29.863227  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:30.003794  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:32.503813  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:31.863254  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:34.363382  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:36.363413  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:34.504191  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:37.003581  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:38.363717  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:40.863294  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:39.504225  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:42.003356  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:42.863457  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:45.363613  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:44.003625  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:46.504247  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:47.863096  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:49.863849  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:49.003291  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:51.003453  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:52.363545  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:54.363732  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:53.504320  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:56.003487  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:18:56.862624  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:58.863111  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:00.863425  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:18:58.504264  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:00.504489  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:03.003398  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:03.363680  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:05.363957  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:05.004021  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:07.503771  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:07.364035  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:09.364588  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:10.003129  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:12.003382  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:11.863661  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:14.362895  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:16.363322  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:14.504382  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:17.003939  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:19.503019  269883 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
	I0701 23:19:20.505831  269883 node_ready.go:38] duration metric: took 4m0.007935364s waiting for node "no-preload-20220701225718-10066" to be "Ready" ...
	I0701 23:19:20.507971  269883 out.go:177] 
	W0701 23:19:20.509514  269883 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0701 23:19:20.509536  269883 out.go:239] * 
	W0701 23:19:20.510312  269883 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 23:19:20.511951  269883 out.go:177] 
	I0701 23:19:18.363478  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:20.863309  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:23.362826  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:25.363077  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:27.863010  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:29.863599  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:31.863690  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:34.363405  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:36.862844  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:39.363009  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:41.863136  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:43.863182  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:46.362920  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:48.363519  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:50.365995  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:52.863524  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:55.363287  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:57.363494  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:19:59.863734  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:02.362902  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:04.363417  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:06.863299  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:08.863390  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:11.363598  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:13.863329  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:16.363213  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:18.363246  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:20.862846  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:22.863412  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:25.363572  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:27.863611  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:29.863926  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:32.363408  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:34.363894  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:36.863454  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:39.363389  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:41.363918  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:43.364119  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:45.863734  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:48.363224  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:50.862933  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:52.863303  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:54.863540  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:57.363333  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:20:59.363619  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:01.863747  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:04.363462  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:06.863642  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:09.363229  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:11.863382  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:14.363453  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:16.363483  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:18.863559  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:20.863852  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:23.363579  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:25.863700  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:27.863820  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:30.363502  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:32.365183  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:34.862977  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:36.863647  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:39.363489  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:41.862636  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:43.863818  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:46.362854  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:48.363608  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:50.863761  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:53.363511  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:55.363792  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:21:57.863460  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:00.363227  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:02.863069  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:04.863654  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:06.863767  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:09.362775  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:11.363266  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:13.363390  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:15.863386  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:18.363719  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:20.363796  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:22.863167  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:24.863249  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:27.362843  275844 node_ready.go:58] node "default-k8s-different-port-20220701230032-10066" has status "Ready":"False"
	I0701 23:22:27.865277  275844 node_ready.go:38] duration metric: took 4m0.008613758s waiting for node "default-k8s-different-port-20220701230032-10066" to be "Ready" ...
	I0701 23:22:27.867660  275844 out.go:177] 
	W0701 23:22:27.869191  275844 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0701 23:22:27.869208  275844 out.go:239] * 
	W0701 23:22:27.869949  275844 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 23:22:27.871815  275844 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	4d788c7174e7c       6fb66cd78abfe       48 seconds ago      Running             kindnet-cni               4                   0870012c7fdbd
	285f970abe262       6fb66cd78abfe       4 minutes ago       Exited              kindnet-cni               3                   0870012c7fdbd
	eb11e24e69335       a634548d10b03       13 minutes ago      Running             kube-proxy                0                   8d6d8a26d2a9a
	4853e6fab716f       5d725196c1f47       13 minutes ago      Running             kube-scheduler            2                   c9495842f595f
	30f4a41daa330       aebe758cef4cd       13 minutes ago      Running             etcd                      2                   9416e3f200057
	63cbe08c42192       34cdf99b1bb3b       13 minutes ago      Running             kube-controller-manager   2                   f23084fa93a83
	dfbb7ffbbb3d0       d3377ffb7177c       13 minutes ago      Running             kube-apiserver            2                   7db62a183733b
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2022-07-01 23:13:37 UTC, end at Fri 2022-07-01 23:31:31 UTC. --
	Jul 01 23:23:50 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:23:50.506931135Z" level=info msg="RemoveContainer for \"3ad6a11b29506a0cba58bb522457def9f974a3db06349f420ab56bfe697fe78c\" returns successfully"
	Jul 01 23:24:04 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:24:04.745713212Z" level=info msg="CreateContainer within sandbox \"0870012c7fdbda9ad037dbb07661db0eaa03cbb4c173aace835d8dde5f4bf397\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jul 01 23:24:04 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:24:04.758557564Z" level=info msg="CreateContainer within sandbox \"0870012c7fdbda9ad037dbb07661db0eaa03cbb4c173aace835d8dde5f4bf397\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"849b317c97581e1091abc46b089724626de5e2d7e0fb8ba0e908c02993c2adaa\""
	Jul 01 23:24:04 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:24:04.758974932Z" level=info msg="StartContainer for \"849b317c97581e1091abc46b089724626de5e2d7e0fb8ba0e908c02993c2adaa\""
	Jul 01 23:24:04 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:24:04.835004068Z" level=info msg="StartContainer for \"849b317c97581e1091abc46b089724626de5e2d7e0fb8ba0e908c02993c2adaa\" returns successfully"
	Jul 01 23:26:45 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:26:45.237933072Z" level=info msg="shim disconnected" id=849b317c97581e1091abc46b089724626de5e2d7e0fb8ba0e908c02993c2adaa
	Jul 01 23:26:45 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:26:45.238005984Z" level=warning msg="cleaning up after shim disconnected" id=849b317c97581e1091abc46b089724626de5e2d7e0fb8ba0e908c02993c2adaa namespace=k8s.io
	Jul 01 23:26:45 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:26:45.238020502Z" level=info msg="cleaning up dead shim"
	Jul 01 23:26:45 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:26:45.247162918Z" level=warning msg="cleanup warnings time=\"2022-07-01T23:26:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4395 runtime=io.containerd.runc.v2\n"
	Jul 01 23:26:45 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:26:45.811403130Z" level=info msg="RemoveContainer for \"2fc852ec2cff3fd96bd143fafa811951fd218e7ab804c77f601ebd1ef3d80cb4\""
	Jul 01 23:26:45 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:26:45.816361754Z" level=info msg="RemoveContainer for \"2fc852ec2cff3fd96bd143fafa811951fd218e7ab804c77f601ebd1ef3d80cb4\" returns successfully"
	Jul 01 23:27:09 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:27:09.745824805Z" level=info msg="CreateContainer within sandbox \"0870012c7fdbda9ad037dbb07661db0eaa03cbb4c173aace835d8dde5f4bf397\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jul 01 23:27:09 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:27:09.758162910Z" level=info msg="CreateContainer within sandbox \"0870012c7fdbda9ad037dbb07661db0eaa03cbb4c173aace835d8dde5f4bf397\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"285f970abe26297e49c74c761d0780b1618dc7b56bc46d5f74ee989eeb67b58a\""
	Jul 01 23:27:09 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:27:09.758783255Z" level=info msg="StartContainer for \"285f970abe26297e49c74c761d0780b1618dc7b56bc46d5f74ee989eeb67b58a\""
	Jul 01 23:27:09 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:27:09.829189765Z" level=info msg="StartContainer for \"285f970abe26297e49c74c761d0780b1618dc7b56bc46d5f74ee989eeb67b58a\" returns successfully"
	Jul 01 23:29:50 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:29:50.263534211Z" level=info msg="shim disconnected" id=285f970abe26297e49c74c761d0780b1618dc7b56bc46d5f74ee989eeb67b58a
	Jul 01 23:29:50 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:29:50.263600842Z" level=warning msg="cleaning up after shim disconnected" id=285f970abe26297e49c74c761d0780b1618dc7b56bc46d5f74ee989eeb67b58a namespace=k8s.io
	Jul 01 23:29:50 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:29:50.263614873Z" level=info msg="cleaning up dead shim"
	Jul 01 23:29:50 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:29:50.273025204Z" level=warning msg="cleanup warnings time=\"2022-07-01T23:29:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4504 runtime=io.containerd.runc.v2\n"
	Jul 01 23:29:51 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:29:51.139524216Z" level=info msg="RemoveContainer for \"849b317c97581e1091abc46b089724626de5e2d7e0fb8ba0e908c02993c2adaa\""
	Jul 01 23:29:51 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:29:51.144303375Z" level=info msg="RemoveContainer for \"849b317c97581e1091abc46b089724626de5e2d7e0fb8ba0e908c02993c2adaa\" returns successfully"
	Jul 01 23:30:42 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:30:42.745529292Z" level=info msg="CreateContainer within sandbox \"0870012c7fdbda9ad037dbb07661db0eaa03cbb4c173aace835d8dde5f4bf397\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Jul 01 23:30:42 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:30:42.758881479Z" level=info msg="CreateContainer within sandbox \"0870012c7fdbda9ad037dbb07661db0eaa03cbb4c173aace835d8dde5f4bf397\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"4d788c7174e7c11061b1394ea81b267fd794deeffe93a177cfd368dd597ac5e3\""
	Jul 01 23:30:42 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:30:42.759363201Z" level=info msg="StartContainer for \"4d788c7174e7c11061b1394ea81b267fd794deeffe93a177cfd368dd597ac5e3\""
	Jul 01 23:30:42 default-k8s-different-port-20220701230032-10066 containerd[394]: time="2022-07-01T23:30:42.833360728Z" level=info msg="StartContainer for \"4d788c7174e7c11061b1394ea81b267fd794deeffe93a177cfd368dd597ac5e3\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220701230032-10066
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220701230032-10066
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04
	                    minikube.k8s.io/name=default-k8s-different-port-20220701230032-10066
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_01T23_18_15_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 01 Jul 2022 23:18:11 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220701230032-10066
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 01 Jul 2022 23:31:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 01 Jul 2022 23:28:36 +0000   Fri, 01 Jul 2022 23:18:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 01 Jul 2022 23:28:36 +0000   Fri, 01 Jul 2022 23:18:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 01 Jul 2022 23:28:36 +0000   Fri, 01 Jul 2022 23:18:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 01 Jul 2022 23:28:36 +0000   Fri, 01 Jul 2022 23:18:09 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-different-port-20220701230032-10066
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873484Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873484Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                674fca36-2ebb-426c-b65b-bd78bdb510f5
	  Boot ID:                    a4927dcd-d031-4927-a8c8-2ea0f9a10287
	  Kernel Version:             5.15.0-1012-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.6
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220701230032-10066                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-g8hks                                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-default-k8s-different-port-20220701230032-10066             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220701230032-10066    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-29k5c                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-default-k8s-different-port-20220701230032-10066             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node default-k8s-different-port-20220701230032-10066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node default-k8s-different-port-20220701230032-10066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node default-k8s-different-port-20220701230032-10066 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m   node-controller  Node default-k8s-different-port-20220701230032-10066 event: Registered Node default-k8s-different-port-20220701230032-10066 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +1.002277] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +2.015803] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000000] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000004] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +4.255546] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000007] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000001] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000005] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000011] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000006] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +8.195166] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000007] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000003] IPv4: martian source 192.168.67.2 from 10.244.0.2, on dev br-c27aabdf32a4
	[  +0.000003] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	[  +0.000002] ll header: 00000000: 02 42 53 da e6 fe 02 42 c0 a8 55 02 08 00
	
	* 
	* ==> etcd [30f4a41daa330161680cb83349451cab0de63a9e9ca0a9556f6b8d8b46ab9366] <==
	* {"level":"info","ts":"2022-07-01T23:18:08.447Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-01T23:18:08.448Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-01T23:18:08.448Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-01T23:18:08.448Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-01T23:18:08.448Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-different-port-20220701230032-10066 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:18:08.837Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-01T23:18:08.838Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-01T23:18:08.838Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-01T23:18:08.838Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:18:08.838Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:18:08.838Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-01T23:18:08.839Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-01T23:18:08.839Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-01T23:28:09.150Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":531}
	{"level":"info","ts":"2022-07-01T23:28:09.151Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":531,"took":"486.507µs"}
	
	* 
	* ==> kernel <==
	*  23:31:31 up  1:14,  0 users,  load average: 0.30, 0.26, 0.64
	Linux default-k8s-different-port-20220701230032-10066 5.15.0-1012-gcp #17~20.04.1-Ubuntu SMP Thu Jun 23 16:10:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [dfbb7ffbbb3d01ec5c655c33be14c60a9a6f2957fe3162cd96a537536351e36c] <==
	* W0701 23:26:12.570731       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:26:12.570801       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0701 23:26:12.570820       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 23:28:12.573634       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:28:12.573668       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0701 23:28:12.573675       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 23:28:12.573691       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:28:12.573752       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0701 23:28:12.574715       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 23:29:12.573896       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:29:12.573942       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0701 23:29:12.573950       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 23:29:12.575062       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:29:12.575104       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0701 23:29:12.575112       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 23:31:12.574446       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:31:12.574501       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0701 23:31:12.574512       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 23:31:12.575553       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 23:31:12.575612       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0701 23:31:12.575624       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [63cbe08c42192f8a68dc4ad6bf2d9244cfb450753c00dd820632e96f48873cdf] <==
	* W0701 23:25:27.363189       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:25:56.938130       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:25:57.378921       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:26:26.948874       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:26:27.394161       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:26:56.959865       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:26:57.410474       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:27:26.969834       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:27:27.423207       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:27:56.977715       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:27:57.435947       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:28:26.986753       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:28:27.450174       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:28:56.995678       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:28:57.465192       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:29:27.028326       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:29:27.478993       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:29:57.044505       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:29:57.493571       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:30:27.055630       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:30:27.507832       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:30:57.071686       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:30:57.522074       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 23:31:27.082095       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0701 23:31:27.537969       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [eb11e24e693352d61f216279f67ca374a059eabd7c14e7969d0c8e9b21761c31] <==
	* I0701 23:18:28.434637       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0701 23:18:28.434719       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0701 23:18:28.434760       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0701 23:18:28.538003       1 server_others.go:206] "Using iptables Proxier"
	I0701 23:18:28.538045       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0701 23:18:28.538060       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0701 23:18:28.538084       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0701 23:18:28.538120       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 23:18:28.538295       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0701 23:18:28.538531       1 server.go:661] "Version info" version="v1.24.2"
	I0701 23:18:28.538597       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 23:18:28.539177       1 config.go:317] "Starting service config controller"
	I0701 23:18:28.539211       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0701 23:18:28.539221       1 config.go:226] "Starting endpoint slice config controller"
	I0701 23:18:28.539229       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0701 23:18:28.539329       1 config.go:444] "Starting node config controller"
	I0701 23:18:28.539355       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0701 23:18:28.639445       1 shared_informer.go:262] Caches are synced for node config
	I0701 23:18:28.639448       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0701 23:18:28.639497       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [4853e6fab716f8af39f331aabb0fc7d89198fa1cc48add3023586165da7b294e] <==
	* E0701 23:18:11.629897       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 23:18:11.629954       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0701 23:18:11.629929       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0701 23:18:11.629977       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0701 23:18:11.630053       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0701 23:18:11.630075       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0701 23:18:11.630268       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0701 23:18:11.630288       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0701 23:18:11.630350       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 23:18:11.630391       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0701 23:18:12.451805       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0701 23:18:12.451837       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0701 23:18:12.464971       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 23:18:12.465005       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0701 23:18:12.531235       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0701 23:18:12.531262       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0701 23:18:12.549388       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 23:18:12.549421       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0701 23:18:12.617871       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0701 23:18:12.617906       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0701 23:18:12.623982       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0701 23:18:12.624020       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0701 23:18:12.770200       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0701 23:18:12.770239       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0701 23:18:14.927748       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2022-07-01 23:13:37 UTC, end at Fri 2022-07-01 23:31:31 UTC. --
	Jul 01 23:30:02 default-k8s-different-port-20220701230032-10066 kubelet[3059]: I0701 23:30:02.742990    3059 scope.go:110] "RemoveContainer" containerID="285f970abe26297e49c74c761d0780b1618dc7b56bc46d5f74ee989eeb67b58a"
	Jul 01 23:30:02 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:30:02.743448    3059 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-g8hks_kube-system(14b30c49-6a5a-4bb2-8b30-0731e8fc2a23)\"" pod="kube-system/kindnet-g8hks" podUID=14b30c49-6a5a-4bb2-8b30-0731e8fc2a23
	Jul 01 23:30:05 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:30:05.118814    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:30:10 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:30:10.120297    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:30:15 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:30:15.121761    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:30:15 default-k8s-different-port-20220701230032-10066 kubelet[3059]: I0701 23:30:15.743168    3059 scope.go:110] "RemoveContainer" containerID="285f970abe26297e49c74c761d0780b1618dc7b56bc46d5f74ee989eeb67b58a"
	Jul 01 23:30:15 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:30:15.743496    3059 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-g8hks_kube-system(14b30c49-6a5a-4bb2-8b30-0731e8fc2a23)\"" pod="kube-system/kindnet-g8hks" podUID=14b30c49-6a5a-4bb2-8b30-0731e8fc2a23
	Jul 01 23:30:20 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:30:20.123489    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:30:25 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:30:25.124625    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:30:27 default-k8s-different-port-20220701230032-10066 kubelet[3059]: I0701 23:30:27.742868    3059 scope.go:110] "RemoveContainer" containerID="285f970abe26297e49c74c761d0780b1618dc7b56bc46d5f74ee989eeb67b58a"
	Jul 01 23:30:27 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:30:27.743157    3059 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-g8hks_kube-system(14b30c49-6a5a-4bb2-8b30-0731e8fc2a23)\"" pod="kube-system/kindnet-g8hks" podUID=14b30c49-6a5a-4bb2-8b30-0731e8fc2a23
	Jul 01 23:30:30 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:30:30.125453    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:30:35 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:30:35.126650    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:30:40 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:30:40.127829    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:30:42 default-k8s-different-port-20220701230032-10066 kubelet[3059]: I0701 23:30:42.742810    3059 scope.go:110] "RemoveContainer" containerID="285f970abe26297e49c74c761d0780b1618dc7b56bc46d5f74ee989eeb67b58a"
	Jul 01 23:30:45 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:30:45.128816    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:30:50 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:30:50.130287    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:30:55 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:30:55.131529    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:31:00 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:31:00.132330    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:31:05 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:31:05.133388    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:31:10 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:31:10.134319    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:31:15 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:31:15.136098    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:31:20 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:31:20.137314    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:31:25 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:31:25.138898    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jul 01 23:31:30 default-k8s-different-port-20220701230032-10066 kubelet[3059]: E0701 23:31:30.140062    3059 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220701230032-10066 -n default-k8s-different-port-20220701230032-10066
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220701230032-10066 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-6d4b75cb6d-wfcgh metrics-server-5c6f97fb75-k9568 storage-provisioner dashboard-metrics-scraper-dffd48c4c-lnpcv kubernetes-dashboard-5fd5574d9f-7klw9
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220701230032-10066 describe pod coredns-6d4b75cb6d-wfcgh metrics-server-5c6f97fb75-k9568 storage-provisioner dashboard-metrics-scraper-dffd48c4c-lnpcv kubernetes-dashboard-5fd5574d9f-7klw9
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220701230032-10066 describe pod coredns-6d4b75cb6d-wfcgh metrics-server-5c6f97fb75-k9568 storage-provisioner dashboard-metrics-scraper-dffd48c4c-lnpcv kubernetes-dashboard-5fd5574d9f-7klw9: exit status 1 (54.042182ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-6d4b75cb6d-wfcgh" not found
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-k9568" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-dffd48c4c-lnpcv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-5fd5574d9f-7klw9" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220701230032-10066 describe pod coredns-6d4b75cb6d-wfcgh metrics-server-5c6f97fb75-k9568 storage-provisioner dashboard-metrics-scraper-dffd48c4c-lnpcv kubernetes-dashboard-5fd5574d9f-7klw9: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (542.29s)
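The failure signature in the post-mortem above is consistent: the node never leaves NotReady because the kindnet CNI container crash-loops ("cni plugin not initialized" in the kubelet logs, CrashLoopBackOff for kindnet-cni). A minimal triage sketch for that situation, using only the profile and pod names quoted in this report; the commands are generic kubectl/minikube/crictl usage and are not part of the test suite:

	# Hedged triage sketch; profile and pod names are taken from the report above.
	PROFILE=default-k8s-different-port-20220701230032-10066
	# Confirm the NotReady condition that blocked UserAppExistsAfterStop.
	kubectl --context "$PROFILE" get nodes -o wide
	# Inspect the crash-looping CNI pod named in the kubelet logs.
	kubectl --context "$PROFILE" -n kube-system describe pod kindnet-g8hks
	kubectl --context "$PROFILE" -n kube-system logs kindnet-g8hks --previous --tail=50
	# Containerd-level view of the same container from inside the node.
	out/minikube-linux-amd64 -p "$PROFILE" ssh "sudo crictl ps -a --name kindnet-cni"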

                                                
                                    

Test pass (247/279)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 6.17
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.24.2/json-events 8.73
11 TestDownloadOnly/v1.24.2/preload-exists 0
15 TestDownloadOnly/v1.24.2/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.37
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.2
18 TestDownloadOnlyKic 2.94
19 TestBinaryMirror 0.88
20 TestOffline 71.88
22 TestAddons/Setup 104.09
24 TestAddons/parallel/Registry 14.96
25 TestAddons/parallel/Ingress 24.36
26 TestAddons/parallel/MetricsServer 5.47
27 TestAddons/parallel/HelmTiller 12.66
29 TestAddons/parallel/CSI 41.36
30 TestAddons/parallel/Headlamp 8.94
32 TestAddons/serial/GCPAuth 38.52
33 TestAddons/StoppedEnableDisable 20.33
34 TestCertOptions 39.52
35 TestCertExpiration 252.06
37 TestForceSystemdFlag 37.31
38 TestForceSystemdEnv 59.56
39 TestKVMDriverInstallOrUpdate 2.42
43 TestErrorSpam/setup 23.46
44 TestErrorSpam/start 0.97
45 TestErrorSpam/status 1.15
46 TestErrorSpam/pause 1.64
47 TestErrorSpam/unpause 1.6
48 TestErrorSpam/stop 20.36
51 TestFunctional/serial/CopySyncFile 0
52 TestFunctional/serial/StartWithProxy 56.15
53 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/SoftStart 15.43
55 TestFunctional/serial/KubeContext 0.04
56 TestFunctional/serial/KubectlGetPods 0.16
59 TestFunctional/serial/CacheCmd/cache/add_remote 3.1
60 TestFunctional/serial/CacheCmd/cache/add_local 1.96
61 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
62 TestFunctional/serial/CacheCmd/cache/list 0.07
63 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.37
64 TestFunctional/serial/CacheCmd/cache/cache_reload 2.15
65 TestFunctional/serial/CacheCmd/cache/delete 0.13
66 TestFunctional/serial/MinikubeKubectlCmd 0.12
67 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
68 TestFunctional/serial/ExtraConfig 34.59
69 TestFunctional/serial/ComponentHealth 0.06
70 TestFunctional/serial/LogsCmd 1.07
71 TestFunctional/serial/LogsFileCmd 1.12
73 TestFunctional/parallel/ConfigCmd 0.54
74 TestFunctional/parallel/DashboardCmd 13.24
75 TestFunctional/parallel/DryRun 0.68
76 TestFunctional/parallel/InternationalLanguage 0.95
77 TestFunctional/parallel/StatusCmd 1.24
80 TestFunctional/parallel/ServiceCmd 22.8
81 TestFunctional/parallel/ServiceCmdConnect 19.77
82 TestFunctional/parallel/AddonsCmd 0.24
83 TestFunctional/parallel/PersistentVolumeClaim 34.66
85 TestFunctional/parallel/SSHCmd 0.88
86 TestFunctional/parallel/CpCmd 2.03
87 TestFunctional/parallel/MySQL 25.81
88 TestFunctional/parallel/FileSync 0.46
89 TestFunctional/parallel/CertSync 2.49
93 TestFunctional/parallel/NodeLabels 0.07
95 TestFunctional/parallel/NonActiveRuntimeDisabled 0.87
97 TestFunctional/parallel/ProfileCmd/profile_not_create 0.6
98 TestFunctional/parallel/Version/short 0.07
99 TestFunctional/parallel/Version/components 0.58
100 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
101 TestFunctional/parallel/ImageCommands/ImageListTable 0.36
102 TestFunctional/parallel/ImageCommands/ImageListJson 0.47
103 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
104 TestFunctional/parallel/ImageCommands/ImageBuild 4.49
105 TestFunctional/parallel/ImageCommands/Setup 1.06
106 TestFunctional/parallel/ProfileCmd/profile_list 0.54
107 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
108 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
109 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
110 TestFunctional/parallel/ProfileCmd/profile_json_output 0.61
112 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.23
115 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.1
116 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.43
117 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.09
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
124 TestFunctional/parallel/MountCmd/any-port 12.04
125 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.79
126 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
127 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.17
128 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.11
129 TestFunctional/parallel/MountCmd/specific-port 2.27
130 TestFunctional/delete_addon-resizer_images 0.1
131 TestFunctional/delete_my-image_image 0.03
132 TestFunctional/delete_minikube_cached_images 0.03
135 TestIngressAddonLegacy/StartLegacyK8sCluster 75.3
137 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 9.18
138 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.38
139 TestIngressAddonLegacy/serial/ValidateIngressAddons 34.47
142 TestJSONOutput/start/Command 45.75
143 TestJSONOutput/start/Audit 0
145 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
146 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/pause/Command 0.69
149 TestJSONOutput/pause/Audit 0
151 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/unpause/Command 0.63
155 TestJSONOutput/unpause/Audit 0
157 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/stop/Command 20.23
161 TestJSONOutput/stop/Audit 0
163 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
165 TestErrorJSONOutput 0.31
167 TestKicCustomNetwork/create_custom_network 34.57
168 TestKicCustomNetwork/use_default_bridge_network 30.03
169 TestKicExistingNetwork 30.87
170 TestKicCustomSubnet 29.59
171 TestMainNoArgs 0.06
172 TestMinikubeProfile 53.06
175 TestMountStart/serial/StartWithMountFirst 5.18
176 TestMountStart/serial/VerifyMountFirst 0.34
177 TestMountStart/serial/StartWithMountSecond 4.85
178 TestMountStart/serial/VerifyMountSecond 0.35
179 TestMountStart/serial/DeleteFirst 1.81
180 TestMountStart/serial/VerifyMountPostDelete 0.35
181 TestMountStart/serial/Stop 1.27
182 TestMountStart/serial/RestartStopped 6.44
183 TestMountStart/serial/VerifyMountPostStop 0.35
186 TestMultiNode/serial/FreshStart2Nodes 92.58
187 TestMultiNode/serial/DeployApp2Nodes 3.17
188 TestMultiNode/serial/PingHostFrom2Pods 0.87
189 TestMultiNode/serial/AddNode 39.57
190 TestMultiNode/serial/ProfileList 0.39
191 TestMultiNode/serial/CopyFile 12.48
192 TestMultiNode/serial/StopNode 2.49
193 TestMultiNode/serial/StartAfterStop 31.09
194 TestMultiNode/serial/RestartKeepsNodes 155.93
195 TestMultiNode/serial/DeleteNode 5.13
196 TestMultiNode/serial/StopMultiNode 40.32
197 TestMultiNode/serial/RestartMultiNode 83.22
198 TestMultiNode/serial/ValidateNameConflict 26.55
203 TestPreload 115.31
205 TestScheduledStopUnix 100.76
208 TestInsufficientStorage 16.6
209 TestRunningBinaryUpgrade 94.56
212 TestMissingContainerUpgrade 144.49
213 TestStoppedBinaryUpgrade/Setup 1.25
215 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
216 TestNoKubernetes/serial/StartWithK8s 46.83
217 TestStoppedBinaryUpgrade/Upgrade 128.94
218 TestNoKubernetes/serial/StartWithStopK8s 18.23
219 TestNoKubernetes/serial/Start 6.74
220 TestNoKubernetes/serial/VerifyK8sNotRunning 0.53
221 TestNoKubernetes/serial/ProfileList 2.69
222 TestNoKubernetes/serial/Stop 1.98
223 TestNoKubernetes/serial/StartNoArgs 6.41
224 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.43
232 TestNetworkPlugins/group/false 0.82
236 TestStoppedBinaryUpgrade/MinikubeLogs 0.86
245 TestPause/serial/Start 62.14
246 TestPause/serial/SecondStartNoReconfiguration 16.14
247 TestPause/serial/Pause 0.82
248 TestPause/serial/VerifyStatus 0.44
249 TestPause/serial/Unpause 0.79
250 TestPause/serial/PauseAgain 0.89
251 TestPause/serial/DeletePaused 2.86
252 TestPause/serial/VerifyDeletedResources 2.48
253 TestNetworkPlugins/group/auto/Start 58.6
254 TestNetworkPlugins/group/kindnet/Start 49.75
255 TestNetworkPlugins/group/calico/Start 68.54
256 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
257 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
258 TestNetworkPlugins/group/kindnet/NetCatPod 9.36
259 TestNetworkPlugins/group/auto/KubeletFlags 0.48
260 TestNetworkPlugins/group/auto/NetCatPod 9.32
261 TestNetworkPlugins/group/kindnet/DNS 0.14
262 TestNetworkPlugins/group/kindnet/Localhost 0.13
263 TestNetworkPlugins/group/kindnet/HairPin 0.12
264 TestNetworkPlugins/group/auto/DNS 0.14
265 TestNetworkPlugins/group/auto/Localhost 0.12
266 TestNetworkPlugins/group/auto/HairPin 0.15
267 TestNetworkPlugins/group/enable-default-cni/Start 58.9
268 TestNetworkPlugins/group/bridge/Start 42.51
269 TestNetworkPlugins/group/calico/ControllerPod 5.02
270 TestNetworkPlugins/group/bridge/KubeletFlags 0.39
271 TestNetworkPlugins/group/calico/KubeletFlags 0.42
272 TestNetworkPlugins/group/bridge/NetCatPod 9.2
273 TestNetworkPlugins/group/calico/NetCatPod 8.25
274 TestNetworkPlugins/group/calico/DNS 0.13
275 TestNetworkPlugins/group/calico/Localhost 0.12
276 TestNetworkPlugins/group/calico/HairPin 0.1
277 TestNetworkPlugins/group/bridge/DNS 0.16
278 TestNetworkPlugins/group/bridge/Localhost 0.14
279 TestNetworkPlugins/group/bridge/HairPin 0.13
280 TestNetworkPlugins/group/cilium/Start 72.49
282 TestStartStop/group/old-k8s-version/serial/FirstStart 338.36
283 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.48
284 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.2
285 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
286 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
287 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
290 TestNetworkPlugins/group/cilium/ControllerPod 5.02
291 TestNetworkPlugins/group/cilium/KubeletFlags 0.38
292 TestNetworkPlugins/group/cilium/NetCatPod 9.81
293 TestNetworkPlugins/group/cilium/DNS 0.14
294 TestNetworkPlugins/group/cilium/Localhost 0.11
295 TestNetworkPlugins/group/cilium/HairPin 0.11
297 TestStartStop/group/embed-certs/serial/FirstStart 57.24
298 TestStartStop/group/embed-certs/serial/DeployApp 9.32
299 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.6
300 TestStartStop/group/embed-certs/serial/Stop 20.15
301 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
302 TestStartStop/group/embed-certs/serial/SecondStart 322.64
306 TestStartStop/group/old-k8s-version/serial/DeployApp 8.31
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.56
308 TestStartStop/group/old-k8s-version/serial/Stop 20.17
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
310 TestStartStop/group/old-k8s-version/serial/SecondStart 628.77
312 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
313 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
314 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.41
315 TestStartStop/group/embed-certs/serial/Pause 3.32
317 TestStartStop/group/newest-cni/serial/FirstStart 35.84
318 TestStartStop/group/newest-cni/serial/DeployApp 0
319 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.55
320 TestStartStop/group/newest-cni/serial/Stop 20.15
321 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
322 TestStartStop/group/newest-cni/serial/SecondStart 29.5
323 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
324 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
325 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.41
326 TestStartStop/group/newest-cni/serial/Pause 3.09
327 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.59
328 TestStartStop/group/no-preload/serial/Stop 20.11
329 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
331 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.62
332 TestStartStop/group/default-k8s-different-port/serial/Stop 20.16
333 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.24
335 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
336 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
337 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.39
338 TestStartStop/group/old-k8s-version/serial/Pause 3.02
TestDownloadOnly/v1.16.0/json-events (6.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220701222330-10066 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220701222330-10066 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.169155527s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (6.17s)
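With -o=json, a download-only start like the one above emits its progress as a stream of JSON events, one object per line. A minimal consumption sketch, assuming only that one-object-per-line shape (the profile name download-only-sketch is hypothetical, and jq is an external tool, not part of the harness):

	# Hedged sketch: compact-print each JSON event from a download-only start.
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-sketch \
	  --force --kubernetes-version=v1.16.0 --container-runtime=containerd \
	  --driver=docker | jq -c 'select(type == "object")'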

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220701222330-10066
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220701222330-10066: exit status 85 (79.0729ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|----------|---------|---------|---------------------|----------|
	| Command |                Args                | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|------------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p         | minikube | jenkins | v1.26.0 | 01 Jul 22 22:23 UTC |          |
	|         | download-only-20220701222330-10066 |          |         |         |                     |          |
	|         | --force --alsologtostderr          |          |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0       |          |         |         |                     |          |
	|         | --container-runtime=containerd     |          |         |         |                     |          |
	|         | --driver=docker                    |          |         |         |                     |          |
	|         | --container-runtime=containerd     |          |         |         |                     |          |
	|---------|------------------------------------|----------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/01 22:23:30
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 22:23:30.811082   10078 out.go:296] Setting OutFile to fd 1 ...
	I0701 22:23:30.811196   10078 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:23:30.811206   10078 out.go:309] Setting ErrFile to fd 2...
	I0701 22:23:30.811210   10078 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:23:30.811622   10078 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	W0701 22:23:30.811728   10078 root.go:307] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/config/config.json: no such file or directory
	I0701 22:23:30.811974   10078 out.go:303] Setting JSON to true
	I0701 22:23:30.812704   10078 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":364,"bootTime":1656713847,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 22:23:30.812759   10078 start.go:125] virtualization: kvm guest
	I0701 22:23:30.816105   10078 out.go:97] [download-only-20220701222330-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0701 22:23:30.816203   10078 notify.go:193] Checking for updates...
	W0701 22:23:30.816240   10078 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball: no such file or directory
	I0701 22:23:30.817844   10078 out.go:169] MINIKUBE_LOCATION=14483
	I0701 22:23:30.819226   10078 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 22:23:30.820535   10078 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 22:23:30.821884   10078 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 22:23:30.823201   10078 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0701 22:23:30.825451   10078 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0701 22:23:30.825614   10078 driver.go:360] Setting default libvirt URI to qemu:///system
	I0701 22:23:30.861573   10078 docker.go:137] docker version: linux-20.10.17
	I0701 22:23:30.861631   10078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 22:23:31.591310   10078 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2022-07-01 22:23:30.88808892 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 22:23:31.591425   10078 docker.go:254] overlay module found
	I0701 22:23:31.593330   10078 out.go:97] Using the docker driver based on user configuration
	I0701 22:23:31.593356   10078 start.go:284] selected driver: docker
	I0701 22:23:31.593365   10078 start.go:808] validating driver "docker" against <nil>
	I0701 22:23:31.593459   10078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 22:23:31.696445   10078 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2022-07-01 22:23:31.620788339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 22:23:31.696553   10078 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0701 22:23:31.697032   10078 start_flags.go:377] Using suggested 8000MB memory alloc based on sys=32103MB, container=32103MB
	I0701 22:23:31.697126   10078 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0701 22:23:31.699192   10078 out.go:169] Using Docker driver with root privileges
	I0701 22:23:31.700543   10078 cni.go:95] Creating CNI manager for ""
	I0701 22:23:31.700566   10078 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 22:23:31.700585   10078 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0701 22:23:31.700599   10078 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0701 22:23:31.700604   10078 start_flags.go:305] Found "CNI" CNI - setting NetworkPlugin=cni
	I0701 22:23:31.700616   10078 start_flags.go:310] config:
	{Name:download-only-20220701222330-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220701222330-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 22:23:31.702197   10078 out.go:97] Starting control plane node download-only-20220701222330-10066 in cluster download-only-20220701222330-10066
	I0701 22:23:31.702216   10078 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0701 22:23:31.703561   10078 out.go:97] Pulling base image ...
	I0701 22:23:31.703586   10078 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0701 22:23:31.703715   10078 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0701 22:23:31.734254   10078 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e to local cache
	I0701 22:23:31.734574   10078 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local cache directory
	I0701 22:23:31.734688   10078 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e to local cache
	I0701 22:23:31.764991   10078 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0701 22:23:31.765013   10078 cache.go:57] Caching tarball of preloaded images
	I0701 22:23:31.765181   10078 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0701 22:23:31.767382   10078 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0701 22:23:31.767405   10078 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0701 22:23:31.830087   10078 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0701 22:23:34.397599   10078 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0701 22:23:34.397669   10078 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0701 22:23:35.281685   10078 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0701 22:23:35.281985   10078 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/download-only-20220701222330-10066/config.json ...
	I0701 22:23:35.282022   10078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/download-only-20220701222330-10066/config.json: {Name:mkd6add4887391c9050b5e068a197b5fef2d84ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 22:23:35.282232   10078 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0701 22:23:35.282484   10078 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220701222330-10066"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

TestDownloadOnly/v1.24.2/json-events (8.73s)

=== RUN   TestDownloadOnly/v1.24.2/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220701222330-10066 --force --alsologtostderr --kubernetes-version=v1.24.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220701222330-10066 --force --alsologtostderr --kubernetes-version=v1.24.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.730044177s)
--- PASS: TestDownloadOnly/v1.24.2/json-events (8.73s)

TestDownloadOnly/v1.24.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.24.2/preload-exists
--- PASS: TestDownloadOnly/v1.24.2/preload-exists (0.00s)

TestDownloadOnly/v1.24.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.24.2/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220701222330-10066
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220701222330-10066: exit status 85 (79.019582ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|----------|---------|---------|---------------------|----------|
	| Command |                Args                | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|------------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p         | minikube | jenkins | v1.26.0 | 01 Jul 22 22:23 UTC |          |
	|         | download-only-20220701222330-10066 |          |         |         |                     |          |
	|         | --force --alsologtostderr          |          |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0       |          |         |         |                     |          |
	|         | --container-runtime=containerd     |          |         |         |                     |          |
	|         | --driver=docker                    |          |         |         |                     |          |
	|         | --container-runtime=containerd     |          |         |         |                     |          |
	| start   | -o=json --download-only -p         | minikube | jenkins | v1.26.0 | 01 Jul 22 22:23 UTC |          |
	|         | download-only-20220701222330-10066 |          |         |         |                     |          |
	|         | --force --alsologtostderr          |          |         |         |                     |          |
	|         | --kubernetes-version=v1.24.2       |          |         |         |                     |          |
	|         | --container-runtime=containerd     |          |         |         |                     |          |
	|         | --driver=docker                    |          |         |         |                     |          |
	|         | --container-runtime=containerd     |          |         |         |                     |          |
	|---------|------------------------------------|----------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/01 22:23:37
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 22:23:37.059605   10246 out.go:296] Setting OutFile to fd 1 ...
	I0701 22:23:37.059740   10246 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:23:37.059751   10246 out.go:309] Setting ErrFile to fd 2...
	I0701 22:23:37.059755   10246 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:23:37.060136   10246 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	W0701 22:23:37.060237   10246 root.go:307] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/config/config.json: no such file or directory
	I0701 22:23:37.060333   10246 out.go:303] Setting JSON to true
	I0701 22:23:37.061065   10246 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":370,"bootTime":1656713847,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 22:23:37.061124   10246 start.go:125] virtualization: kvm guest
	I0701 22:23:37.063425   10246 out.go:97] [download-only-20220701222330-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0701 22:23:37.063548   10246 notify.go:193] Checking for updates...
	I0701 22:23:37.065288   10246 out.go:169] MINIKUBE_LOCATION=14483
	I0701 22:23:37.066882   10246 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 22:23:37.068405   10246 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 22:23:37.069949   10246 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 22:23:37.071289   10246 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0701 22:23:37.074276   10246 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0701 22:23:37.074695   10246 config.go:178] Loaded profile config "download-only-20220701222330-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0701 22:23:37.074760   10246 start.go:716] api.Load failed for download-only-20220701222330-10066: filestore "download-only-20220701222330-10066": Docker machine "download-only-20220701222330-10066" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0701 22:23:37.074814   10246 driver.go:360] Setting default libvirt URI to qemu:///system
	W0701 22:23:37.074862   10246 start.go:716] api.Load failed for download-only-20220701222330-10066: filestore "download-only-20220701222330-10066": Docker machine "download-only-20220701222330-10066" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0701 22:23:37.111690   10246 docker.go:137] docker version: linux-20.10.17
	I0701 22:23:37.111781   10246 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 22:23:37.215156   10246 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2022-07-01 22:23:37.13757643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 22:23:37.215320   10246 docker.go:254] overlay module found
	I0701 22:23:37.217180   10246 out.go:97] Using the docker driver based on existing profile
	I0701 22:23:37.217200   10246 start.go:284] selected driver: docker
	I0701 22:23:37.217210   10246 start.go:808] validating driver "docker" against &{Name:download-only-20220701222330-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220701222330-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 22:23:37.217372   10246 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 22:23:37.315810   10246 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2022-07-01 22:23:37.244744663 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 22:23:37.316321   10246 cni.go:95] Creating CNI manager for ""
	I0701 22:23:37.316336   10246 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0701 22:23:37.316347   10246 start_flags.go:310] config:
	{Name:download-only-20220701222330-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:download-only-20220701222330-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 22:23:37.318232   10246 out.go:97] Starting control plane node download-only-20220701222330-10066 in cluster download-only-20220701222330-10066
	I0701 22:23:37.318268   10246 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0701 22:23:37.319840   10246 out.go:97] Pulling base image ...
	I0701 22:23:37.319867   10246 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 22:23:37.319898   10246 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0701 22:23:37.347870   10246 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e to local cache
	I0701 22:23:37.348107   10246 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local cache directory
	I0701 22:23:37.348129   10246 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local cache directory, skipping pull
	I0701 22:23:37.348133   10246 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in cache, skipping pull
	I0701 22:23:37.348146   10246 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e as a tarball
	I0701 22:23:37.381191   10246 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.2/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4
	I0701 22:23:37.381213   10246 cache.go:57] Caching tarball of preloaded images
	I0701 22:23:37.381369   10246 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0701 22:23:37.383370   10246 out.go:97] Downloading Kubernetes v1.24.2 preload ...
	I0701 22:23:37.383393   10246 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 ...
	I0701 22:23:37.444855   10246 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.2/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:ea4039edb2e481b1845a8b624da36527 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220701222330-10066"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.24.2/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.37s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.37s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.2s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20220701222330-10066
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.20s)

TestDownloadOnlyKic (2.94s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20220701222346-10066 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20220701222346-10066 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (1.836839262s)
helpers_test.go:175: Cleaning up "download-docker-20220701222346-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20220701222346-10066
--- PASS: TestDownloadOnlyKic (2.94s)

TestBinaryMirror (0.88s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-20220701222349-10066 --alsologtostderr --binary-mirror http://127.0.0.1:44347 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-20220701222349-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-20220701222349-10066
--- PASS: TestBinaryMirror (0.88s)

TestOffline (71.88s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20220701224953-10066 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20220701224953-10066 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m8.777152007s)
helpers_test.go:175: Cleaning up "offline-containerd-20220701224953-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20220701224953-10066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20220701224953-10066: (3.103490122s)
--- PASS: TestOffline (71.88s)

TestAddons/Setup (104.09s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20220701222350-10066 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220701222350-10066 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m44.08668793s)
--- PASS: TestAddons/Setup (104.09s)

TestAddons/parallel/Registry (14.96s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:282: registry stabilized in 14.282523ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:284: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-jbcb9" [e2bbc0c2-0c81-4d62-97e2-4b378c591d75] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:284: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00688024s
addons_test.go:287: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-proxy-gs86c" [8c4d4409-6e72-4617-8246-a563f96dee33] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:287: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007566057s
addons_test.go:292: (dbg) Run:  kubectl --context addons-20220701222350-10066 delete po -l run=registry-test --now
addons_test.go:297: (dbg) Run:  kubectl --context addons-20220701222350-10066 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) Done: kubectl --context addons-20220701222350-10066 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.132126981s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220701222350-10066 ip
2022/07/01 22:25:48 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:340: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220701222350-10066 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.96s)

TestAddons/parallel/Ingress (24.36s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:164: (dbg) Run:  kubectl --context addons-20220701222350-10066 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s

=== CONT  TestAddons/parallel/Ingress
addons_test.go:164: (dbg) Done: kubectl --context addons-20220701222350-10066 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (1.891618193s)
addons_test.go:184: (dbg) Run:  kubectl --context addons-20220701222350-10066 replace --force -f testdata/nginx-ingress-v1.yaml

=== CONT  TestAddons/parallel/Ingress
addons_test.go:197: (dbg) Run:  kubectl --context addons-20220701222350-10066 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [a6580895-d1a3-4e82-8b60-05debe22c93f] Pending
helpers_test.go:342: "nginx" [a6580895-d1a3-4e82-8b60-05debe22c93f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [a6580895-d1a3-4e82-8b60-05debe22c93f] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.007216911s
addons_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220701222350-10066 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Run:  kubectl --context addons-20220701222350-10066 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220701222350-10066 ip
addons_test.go:249: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220701222350-10066 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:258: (dbg) Done: out/minikube-linux-amd64 -p addons-20220701222350-10066 addons disable ingress-dns --alsologtostderr -v=1: (1.102100822s)
addons_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220701222350-10066 addons disable ingress --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:263: (dbg) Done: out/minikube-linux-amd64 -p addons-20220701222350-10066 addons disable ingress --alsologtostderr -v=1: (7.498322403s)
--- PASS: TestAddons/parallel/Ingress (24.36s)

TestAddons/parallel/MetricsServer (5.47s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: metrics-server stabilized in 11.520337ms

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:342: "metrics-server-8595bd7d4c-bzvfl" [085d137a-66b3-4526-be80-6065178b5dc0] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00813215s
addons_test.go:367: (dbg) Run:  kubectl --context addons-20220701222350-10066 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:384: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220701222350-10066 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.47s)

TestAddons/parallel/HelmTiller (12.66s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: tiller-deploy stabilized in 1.985744ms
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-c7d76457b-pbdp5" [cb08544e-946a-4aad-916a-343e20ae6eba] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.007598136s
addons_test.go:425: (dbg) Run:  kubectl --context addons-20220701222350-10066 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:425: (dbg) Done: kubectl --context addons-20220701222350-10066 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.287091361s)
addons_test.go:430: kubectl --context addons-20220701222350-10066 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:442: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220701222350-10066 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.66s)

TestAddons/parallel/CSI (41.36s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:513: csi-hostpath-driver pods stabilized in 12.174123ms
addons_test.go:516: (dbg) Run:  kubectl --context addons-20220701222350-10066 create -f testdata/csi-hostpath-driver/pvc.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:521: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220701222350-10066 get pvc hpvc -o jsonpath={.status.phase} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220701222350-10066 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:526: (dbg) Run:  kubectl --context addons-20220701222350-10066 create -f testdata/csi-hostpath-driver/pv-pod.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:531: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [65ea8f4e-6e95-4940-ac95-63c810d03ed0] Pending
helpers_test.go:342: "task-pv-pod" [65ea8f4e-6e95-4940-ac95-63c810d03ed0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [65ea8f4e-6e95-4940-ac95-63c810d03ed0] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:531: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.008083597s
addons_test.go:536: (dbg) Run:  kubectl --context addons-20220701222350-10066 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:541: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220701222350-10066 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220701222350-10066 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:546: (dbg) Run:  kubectl --context addons-20220701222350-10066 delete pod task-pv-pod
addons_test.go:552: (dbg) Run:  kubectl --context addons-20220701222350-10066 delete pvc hpvc
addons_test.go:558: (dbg) Run:  kubectl --context addons-20220701222350-10066 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:563: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220701222350-10066 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:568: (dbg) Run:  kubectl --context addons-20220701222350-10066 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:573: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [1ef07210-eb00-442c-b736-b73d860ec618] Pending
helpers_test.go:342: "task-pv-pod-restore" [1ef07210-eb00-442c-b736-b73d860ec618] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [1ef07210-eb00-442c-b736-b73d860ec618] Running
addons_test.go:573: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 14.007347986s
addons_test.go:578: (dbg) Run:  kubectl --context addons-20220701222350-10066 delete pod task-pv-pod-restore

=== CONT  TestAddons/parallel/CSI
addons_test.go:582: (dbg) Run:  kubectl --context addons-20220701222350-10066 delete pvc hpvc-restore
addons_test.go:586: (dbg) Run:  kubectl --context addons-20220701222350-10066 delete volumesnapshot new-snapshot-demo
addons_test.go:590: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220701222350-10066 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:590: (dbg) Done: out/minikube-linux-amd64 -p addons-20220701222350-10066 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.860070762s)
addons_test.go:594: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220701222350-10066 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (41.36s)
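
The CSI test above walks the full provision/snapshot/restore loop. A condensed sketch of the same flow, assuming the csi-hostpath-driver and volumesnapshots addons are enabled as earlier in this run (the testdata manifests' contents are not shown here; only the filenames and object names come from the log):

    kubectl --context addons-20220701222350-10066 create -f testdata/csi-hostpath-driver/pvc.yaml        # PVC "hpvc"
    kubectl --context addons-20220701222350-10066 create -f testdata/csi-hostpath-driver/pv-pod.yaml     # pod "task-pv-pod"
    kubectl --context addons-20220701222350-10066 create -f testdata/csi-hostpath-driver/snapshot.yaml   # VolumeSnapshot "new-snapshot-demo"
    kubectl --context addons-20220701222350-10066 delete pod task-pv-pod
    kubectl --context addons-20220701222350-10066 delete pvc hpvc
    kubectl --context addons-20220701222350-10066 create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # PVC "hpvc-restore", restored from the snapshot
    kubectl --context addons-20220701222350-10066 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # pod "task-pv-pod-restore"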

TestAddons/parallel/Headlamp (8.94s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-20220701222350-10066 --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-866f5bd7bc-r7mgw" [35112932-08fe-4021-9d1b-67a7b9181fb3] Pending

=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-866f5bd7bc-r7mgw" [35112932-08fe-4021-9d1b-67a7b9181fb3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-866f5bd7bc-r7mgw" [35112932-08fe-4021-9d1b-67a7b9181fb3] Running

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 8.006881198s
--- PASS: TestAddons/parallel/Headlamp (8.94s)
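
The readiness wait above can be reproduced by listing the pod via the same label selector the helper polls (label and namespace taken from the log):

    kubectl --context addons-20220701222350-10066 get pods -n headlamp -l app.kubernetes.io/name=headlamp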

TestAddons/serial/GCPAuth (38.52s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:605: (dbg) Run:  kubectl --context addons-20220701222350-10066 create -f testdata/busybox.yaml
addons_test.go:612: (dbg) Run:  kubectl --context addons-20220701222350-10066 create sa gcp-auth-test
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [2b0c24b1-1756-4795-856c-a87cbd0ccbb8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [2b0c24b1-1756-4795-856c-a87cbd0ccbb8] Running
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 7.005706656s
addons_test.go:624: (dbg) Run:  kubectl --context addons-20220701222350-10066 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:636: (dbg) Run:  kubectl --context addons-20220701222350-10066 describe sa gcp-auth-test
addons_test.go:674: (dbg) Run:  kubectl --context addons-20220701222350-10066 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:687: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220701222350-10066 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:687: (dbg) Done: out/minikube-linux-amd64 -p addons-20220701222350-10066 addons disable gcp-auth --alsologtostderr -v=1: (6.140979621s)
addons_test.go:703: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220701222350-10066 addons enable gcp-auth
addons_test.go:703: (dbg) Done: out/minikube-linux-amd64 -p addons-20220701222350-10066 addons enable gcp-auth: (2.173778737s)
addons_test.go:709: (dbg) Run:  kubectl --context addons-20220701222350-10066 apply -f testdata/private-image.yaml
addons_test.go:716: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-7c74db7cd9-rmrx5" [4719e89f-dca8-40f1-96bb-5adc5a37c742] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-7c74db7cd9-rmrx5" [4719e89f-dca8-40f1-96bb-5adc5a37c742] Running
addons_test.go:716: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 12.005654373s
addons_test.go:722: (dbg) Run:  kubectl --context addons-20220701222350-10066 apply -f testdata/private-image-eu.yaml
addons_test.go:727: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-545d57c67f-mvrpx" [8af220a3-19bd-4c94-a313-006d7ac00e69] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-545d57c67f-mvrpx" [8af220a3-19bd-4c94-a313-006d7ac00e69] Running
addons_test.go:727: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 10.005464351s
--- PASS: TestAddons/serial/GCPAuth (38.52s)
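
The two printenv probes above are the heart of the gcp-auth check: the addon's mutating webhook injects GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT into newly created pods. A one-line sketch combining both probes (pod and context names from the log):

    kubectl --context addons-20220701222350-10066 exec busybox -- \
      /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT"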

TestAddons/StoppedEnableDisable (20.33s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:134: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20220701222350-10066
addons_test.go:134: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220701222350-10066: (20.127611159s)
addons_test.go:138: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20220701222350-10066
addons_test.go:142: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20220701222350-10066
--- PASS: TestAddons/StoppedEnableDisable (20.33s)

TestCertOptions (39.52s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20220701225244-10066 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220701225244-10066 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (36.187215311s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20220701225244-10066 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-20220701225244-10066 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20220701225244-10066 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220701225244-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20220701225244-10066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220701225244-10066: (2.473484979s)
--- PASS: TestCertOptions (39.52s)
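
The openssl step above is how the test confirms that the extra --apiserver-ips/--apiserver-names values made it into the serving certificate. A sketch to eyeball the SANs and the non-default port 8555 directly (the grep filter and jsonpath expression are illustrative, not from the test):

    out/minikube-linux-amd64 -p cert-options-20220701225244-10066 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    kubectl --context cert-options-20220701225244-10066 config view -o jsonpath="{.clusters[0].cluster.server}"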

TestCertExpiration (252.06s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220701225121-10066 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E0701 22:51:36.554664   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220701225121-10066 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (54.397728711s)

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220701225121-10066 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220701225121-10066 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (15.231630827s)
helpers_test.go:175: Cleaning up "cert-expiration-20220701225121-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20220701225121-10066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220701225121-10066: (2.43282914s)
--- PASS: TestCertExpiration (252.06s)
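
The test starts with --cert-expiration=3m, lets the certificates age past expiry (hence the roughly three-minute gap in the timings above), then restarts with --cert-expiration=8760h so minikube regenerates them. While such a cluster exists, the expiry date can be inspected with a sketch like:

    out/minikube-linux-amd64 -p cert-expiration-20220701225121-10066 ssh \
      "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"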

TestForceSystemdFlag (37.31s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20220701225207-10066 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20220701225207-10066 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.629759021s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20220701225207-10066 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220701225207-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20220701225207-10066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220701225207-10066: (2.312805538s)
--- PASS: TestForceSystemdFlag (37.31s)
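
The `cat /etc/containerd/config.toml` step is where the assertion happens: with --force-systemd, containerd's runc runtime should be switched to the systemd cgroup driver. A sketch of the manual check (the expected `SystemdCgroup = true` line is containerd's standard knob, assumed here rather than quoted from this log):

    out/minikube-linux-amd64 -p force-systemd-flag-20220701225207-10066 ssh \
      "cat /etc/containerd/config.toml" | grep SystemdCgroup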

TestForceSystemdEnv (59.56s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20220701224953-10066 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20220701224953-10066 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (56.670482898s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20220701224953-10066 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-20220701224953-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20220701224953-10066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220701224953-10066: (2.515496474s)
--- PASS: TestForceSystemdEnv (59.56s)
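
This variant exercises the same systemd switch as TestForceSystemdFlag but drives it through the environment; note the start command in the log carries no --force-systemd flag. A sketch of how the env-driven form is typically invoked, assuming the MINIKUBE_FORCE_SYSTEMD variable that the test name implies:

    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start \
      -p force-systemd-env-20220701224953-10066 --memory=2048 --driver=docker --container-runtime=containerd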

TestKVMDriverInstallOrUpdate (2.42s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.42s)

TestErrorSpam/setup (23.46s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20220701222720-10066 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220701222720-10066 --driver=docker  --container-runtime=containerd
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220701222720-10066 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220701222720-10066 --driver=docker  --container-runtime=containerd: (23.457029745s)
--- PASS: TestErrorSpam/setup (23.46s)

TestErrorSpam/start (0.97s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220701222720-10066 --log_dir /tmp/nospam-20220701222720-10066 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220701222720-10066 --log_dir /tmp/nospam-20220701222720-10066 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220701222720-10066 --log_dir /tmp/nospam-20220701222720-10066 start --dry-run
--- PASS: TestErrorSpam/start (0.97s)

TestErrorSpam/status (1.15s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220701222720-10066 --log_dir /tmp/nospam-20220701222720-10066 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220701222720-10066 --log_dir /tmp/nospam-20220701222720-10066 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220701222720-10066 --log_dir /tmp/nospam-20220701222720-10066 status
--- PASS: TestErrorSpam/status (1.15s)

TestErrorSpam/pause (1.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220701222720-10066 --log_dir /tmp/nospam-20220701222720-10066 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220701222720-10066 --log_dir /tmp/nospam-20220701222720-10066 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220701222720-10066 --log_dir /tmp/nospam-20220701222720-10066 pause
--- PASS: TestErrorSpam/pause (1.64s)

TestErrorSpam/unpause (1.60s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220701222720-10066 --log_dir /tmp/nospam-20220701222720-10066 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220701222720-10066 --log_dir /tmp/nospam-20220701222720-10066 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220701222720-10066 --log_dir /tmp/nospam-20220701222720-10066 unpause
--- PASS: TestErrorSpam/unpause (1.60s)

TestErrorSpam/stop (20.36s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220701222720-10066 --log_dir /tmp/nospam-20220701222720-10066 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220701222720-10066 --log_dir /tmp/nospam-20220701222720-10066 stop: (20.089515549s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220701222720-10066 --log_dir /tmp/nospam-20220701222720-10066 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220701222720-10066 --log_dir /tmp/nospam-20220701222720-10066 stop
--- PASS: TestErrorSpam/stop (20.36s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/test/nested/copy/10066/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (56.15s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220701222815-10066 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2160: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220701222815-10066 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (56.154142769s)
--- PASS: TestFunctional/serial/StartWithProxy (56.15s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (15.43s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220701222815-10066 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220701222815-10066 --alsologtostderr -v=8: (15.425367474s)
functional_test.go:655: soft start took 15.426059342s for "functional-20220701222815-10066" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.43s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.16s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220701222815-10066 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.16s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-linux-amd64 -p functional-20220701222815-10066 cache add k8s.gcr.io/pause:3.1: (1.024654997s)
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-linux-amd64 -p functional-20220701222815-10066 cache add k8s.gcr.io/pause:3.3: (1.138104805s)
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 cache add k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.10s)

TestFunctional/serial/CacheCmd/cache/add_local (1.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220701222815-10066 /tmp/TestFunctionalserialCacheCmdcacheadd_local24141193/001
functional_test.go:1081: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 cache add minikube-local-cache-test:functional-20220701222815-10066
functional_test.go:1081: (dbg) Done: out/minikube-linux-amd64 -p functional-20220701222815-10066 cache add minikube-local-cache-test:functional-20220701222815-10066: (1.68167445s)
functional_test.go:1086: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 cache delete minikube-local-cache-test:functional-20220701222815-10066
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220701222815-10066
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.96s)
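
add_local verifies that a locally built image can be pushed into the cluster through minikube's cache. A sketch of the same round trip, reusing the tag from the log (the docker build context here is an arbitrary directory containing a Dockerfile; the test's context is a temp dir):

    docker build -t minikube-local-cache-test:functional-20220701222815-10066 .
    out/minikube-linux-amd64 -p functional-20220701222815-10066 cache add minikube-local-cache-test:functional-20220701222815-10066
    out/minikube-linux-amd64 -p functional-20220701222815-10066 cache delete minikube-local-cache-test:functional-20220701222815-10066
    docker rmi minikube-local-cache-test:functional-20220701222815-10066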

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (361.625473ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-linux-amd64 -p functional-20220701222815-10066 cache reload: (1.05748948s)
functional_test.go:1155: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)
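
cache_reload shows the recovery path: an image removed inside the node (the crictl rmi above, after which crictl inspecti fails with exit status 1) is restored by `cache reload` from minikube's on-host cache. The cycle, condensed from the log:

    out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh sudo crictl rmi k8s.gcr.io/pause:latest
    out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # exit 1: image gone
    out/minikube-linux-amd64 -p functional-20220701222815-10066 cache reload
    out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # image back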

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 kubectl -- --context functional-20220701222815-10066 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out/kubectl --context functional-20220701222815-10066 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (34.59s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220701222815-10066 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:749: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220701222815-10066 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.588433681s)
functional_test.go:753: restart took 34.588555363s for "functional-20220701222815-10066" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.59s)
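
--extra-config takes component.key=value pairs that are passed through to the kubeadm-managed components; the run above threads an admission plugin into the apiserver and then waits for all components. The invocation, as used by the test:

    out/minikube-linux-amd64 start -p functional-20220701222815-10066 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all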

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220701222815-10066 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.07s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 logs
functional_test.go:1228: (dbg) Done: out/minikube-linux-amd64 -p functional-20220701222815-10066 logs: (1.067358701s)
--- PASS: TestFunctional/serial/LogsCmd (1.07s)

TestFunctional/serial/LogsFileCmd (1.12s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 logs --file /tmp/TestFunctionalserialLogsFileCmd3351860334/001/logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-linux-amd64 -p functional-20220701222815-10066 logs --file /tmp/TestFunctionalserialLogsFileCmd3351860334/001/logs.txt: (1.122517599s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.12s)
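
`logs --file` writes the same output that LogsCmd printed to stdout into a file instead. A sketch with an arbitrary destination path (the test uses a per-test temp dir):

    out/minikube-linux-amd64 -p functional-20220701222815-10066 logs --file /tmp/minikube-logs.txt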

TestFunctional/parallel/ConfigCmd (0.54s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220701222815-10066 config get cpus: exit status 14 (90.82839ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220701222815-10066 config get cpus: exit status 14 (80.428904ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)
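
The exit status 14 above accompanies the "specified key could not be found in config" error: `config get` on an unset key fails, while set/unset succeed silently. The cycle the test drives, condensed:

    out/minikube-linux-amd64 -p functional-20220701222815-10066 config get cpus      # exit 14 while unset
    out/minikube-linux-amd64 -p functional-20220701222815-10066 config set cpus 2
    out/minikube-linux-amd64 -p functional-20220701222815-10066 config get cpus      # prints 2
    out/minikube-linux-amd64 -p functional-20220701222815-10066 config unset cpus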

TestFunctional/parallel/DashboardCmd (13.24s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220701222815-10066 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:902: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220701222815-10066 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 46361: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.24s)
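
The dashboard command runs as a background daemon here: it proxies the dashboard, prints the URL, and blocks until killed, which is why the harness's later cleanup finds the pid already finished. The invocation from the log:

    out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220701222815-10066 --alsologtostderr -v=1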

TestFunctional/parallel/DryRun (0.68s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220701222815-10066 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220701222815-10066 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (306.865977ms)

-- stdout --
	* [functional-20220701222815-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14483
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0701 22:30:38.117720   45343 out.go:296] Setting OutFile to fd 1 ...
	I0701 22:30:38.117868   45343 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:30:38.117881   45343 out.go:309] Setting ErrFile to fd 2...
	I0701 22:30:38.117888   45343 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:30:38.118445   45343 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 22:30:38.118772   45343 out.go:303] Setting JSON to false
	I0701 22:30:38.120355   45343 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":791,"bootTime":1656713847,"procs":545,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 22:30:38.120441   45343 start.go:125] virtualization: kvm guest
	I0701 22:30:38.123191   45343 out.go:177] * [functional-20220701222815-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0701 22:30:38.124712   45343 out.go:177]   - MINIKUBE_LOCATION=14483
	I0701 22:30:38.126126   45343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 22:30:38.127531   45343 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 22:30:38.128992   45343 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 22:30:38.130261   45343 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 22:30:38.131982   45343 config.go:178] Loaded profile config "functional-20220701222815-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 22:30:38.133111   45343 driver.go:360] Setting default libvirt URI to qemu:///system
	I0701 22:30:38.181123   45343 docker.go:137] docker version: linux-20.10.17
	I0701 22:30:38.181259   45343 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 22:30:38.338513   45343 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-07-01 22:30:38.25142761 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 22:30:38.338647   45343 docker.go:254] overlay module found
	I0701 22:30:38.340492   45343 out.go:177] * Using the docker driver based on existing profile
	I0701 22:30:38.341834   45343 start.go:284] selected driver: docker
	I0701 22:30:38.341851   45343 start.go:808] validating driver "docker" against &{Name:functional-20220701222815-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:functional-20220701222815-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 22:30:38.342000   45343 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 22:30:38.344346   45343 out.go:177] 
	W0701 22:30:38.345610   45343 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0701 22:30:38.346817   45343 out.go:177] 

** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220701222815-10066 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.68s)
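
--dry-run runs the full validation path without touching the cluster: the 250MB request fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY, usable minimum 1800MB per the stderr above), while the second invocation with no memory override validates cleanly. Both invocations, from the log:

    out/minikube-linux-amd64 start -p functional-20220701222815-10066 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd   # exit 23
    out/minikube-linux-amd64 start -p functional-20220701222815-10066 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd            # ok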

TestFunctional/parallel/InternationalLanguage (0.95s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220701222815-10066 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220701222815-10066 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (947.31131ms)

-- stdout --
	* [functional-20220701222815-10066] minikube v1.26.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14483
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0701 22:30:26.030892   43129 out.go:296] Setting OutFile to fd 1 ...
	I0701 22:30:26.031037   43129 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:30:26.031047   43129 out.go:309] Setting ErrFile to fd 2...
	I0701 22:30:26.031052   43129 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:30:26.031612   43129 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 22:30:26.031920   43129 out.go:303] Setting JSON to false
	I0701 22:30:26.033248   43129 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":779,"bootTime":1656713847,"procs":530,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 22:30:26.033333   43129 start.go:125] virtualization: kvm guest
	I0701 22:30:26.226775   43129 out.go:177] * [functional-20220701222815-10066] minikube v1.26.0 sur Ubuntu 20.04 (kvm/amd64)
	I0701 22:30:26.688687   43129 out.go:177]   - MINIKUBE_LOCATION=14483
	I0701 22:30:26.694999   43129 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 22:30:26.699690   43129 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 22:30:26.701601   43129 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 22:30:26.704260   43129 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 22:30:26.706113   43129 config.go:178] Loaded profile config "functional-20220701222815-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 22:30:26.706659   43129 driver.go:360] Setting default libvirt URI to qemu:///system
	I0701 22:30:26.757917   43129 docker.go:137] docker version: linux-20.10.17
	I0701 22:30:26.758022   43129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 22:30:26.895882   43129 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:42 SystemTime:2022-07-01 22:30:26.794594015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 22:30:26.896041   43129 docker.go:254] overlay module found
	I0701 22:30:26.899865   43129 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0701 22:30:26.900851   43129 start.go:284] selected driver: docker
	I0701 22:30:26.900863   43129 start.go:808] validating driver "docker" against &{Name:functional-20220701222815-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:functional-20220701222815-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0701 22:30:26.900965   43129 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 22:30:26.903135   43129 out.go:177] 
	W0701 22:30:26.904415   43129 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0701 22:30:26.905744   43129 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.95s)
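
The French output above is the point of this test: it forces a localized run and asserts the failure message is translated. The message reads, in English, "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is below the usable minimum of 1800MB" (and the earlier line is "Using the docker driver based on the existing profile"). A minimal sketch of reproducing it by hand, assuming minikube picks the locale up from LC_ALL and using an arbitrary profile name:

    # Force a French locale and request less memory than minikube's 1800MB floor
    LC_ALL=fr out/minikube-linux-amd64 start -p i18n-demo --memory=250 --driver=docker --container-runtime=containerd
    # Expected: exit with RSRC_INSUFFICIENT_REQ_MEMORY, message rendered in French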

TestFunctional/parallel/StatusCmd (1.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 status
E0701 22:30:37.063457   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
functional_test.go:852: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.24s)
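
The -f flag at functional_test.go:852 takes a Go template over minikube's status struct; the labels left of each colon are free-form text (hence the "kublet" spelling in the test's template). A sketch of the same checks run by hand, with the flags taken from the log:

    # One-line status from a Go template (Host/Kubelet/APIServer/Kubeconfig are the struct fields)
    out/minikube-linux-amd64 -p functional-20220701222815-10066 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    # Machine-readable variant
    out/minikube-linux-amd64 -p functional-20220701222815-10066 status -o json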

TestFunctional/parallel/ServiceCmd (22.8s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220701222815-10066 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220701222815-10066 expose deployment hello-node --type=NodePort --port=8080

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54c4b5c49f-nbwns" [2077e179-af52-4a48-94df-c9032a368086] Pending

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54c4b5c49f-nbwns" [2077e179-af52-4a48-94df-c9032a368086] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0701 22:30:44.745037   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54c4b5c49f-nbwns" [2077e179-af52-4a48-94df-c9032a368086] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 19.005746215s
functional_test.go:1448: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 service list

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1448: (dbg) Done: out/minikube-linux-amd64 -p functional-20220701222815-10066 service list: (1.80008189s)
functional_test.go:1462: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 service --namespace=default --https --url hello-node
functional_test.go:1475: found endpoint: https://192.168.49.2:32628
functional_test.go:1490: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 service hello-node --url --format={{.IP}}
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 service hello-node --url
functional_test.go:1510: found endpoint for hello-node: http://192.168.49.2:32628
--- PASS: TestFunctional/parallel/ServiceCmd (22.80s)
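
Condensed, the flow above is the standard expose-and-resolve sequence; the names come from the log, and the NodePort (32628 in this run) is assigned per run:

    kubectl --context functional-20220701222815-10066 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
    kubectl --context functional-20220701222815-10066 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-20220701222815-10066 service hello-node --url   # prints http://192.168.49.2:<nodeport>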

TestFunctional/parallel/ServiceCmdConnect (19.77s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220701222815-10066 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220701222815-10066 expose deployment hello-node-connect --type=NodePort --port=8080

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-578cdc45cb-gwktd" [cdcb3fd1-ee35-4c50-a4aa-afcf868c5635] Pending

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-578cdc45cb-gwktd" [cdcb3fd1-ee35-4c50-a4aa-afcf868c5635] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-578cdc45cb-gwktd" [cdcb3fd1-ee35-4c50-a4aa-afcf868c5635] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 19.00737024s
functional_test.go:1578: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 service hello-node-connect --url

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1584: found endpoint for hello-node-connect: http://192.168.49.2:31222
functional_test.go:1604: http://192.168.49.2:31222: success! body:

Hostname: hello-node-connect-578cdc45cb-gwktd

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31222
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (19.77s)
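
The echoserver body above is what a plain HTTP GET against the resolved URL returns; the test fetches it with Go's default client (hence the Go-http-client/1.1 user agent in the echoed headers). The equivalent manual check, assuming the same per-run endpoint:

    # Fetch the response the test validated (the NodePort differs per run)
    curl -s http://192.168.49.2:31222/ | grep Hostname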

TestFunctional/parallel/AddonsCmd (0.24s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 addons list

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1631: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

TestFunctional/parallel/PersistentVolumeClaim (34.66s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [97302cf8-3dd0-4990-9f77-efb8a930536b] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.016927695s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220701222815-10066 get storageclass -o=json

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220701222815-10066 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220701222815-10066 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220701222815-10066 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [1c2d97c2-f829-4f1d-b120-d6d941a088a9] Pending
helpers_test.go:342: "sp-pod" [1c2d97c2-f829-4f1d-b120-d6d941a088a9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [1c2d97c2-f829-4f1d-b120-d6d941a088a9] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.006570453s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220701222815-10066 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220701222815-10066 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220701222815-10066 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [22c438c2-5629-4ba3-9f99-578f007d47bc] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [22c438c2-5629-4ba3-9f99-578f007d47bc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [22c438c2-5629-4ba3-9f99-578f007d47bc] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.006962066s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220701222815-10066 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.66s)
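
The persistence check above is: claim a volume, write /tmp/mount/foo through one pod, delete that pod, re-create it, and read the file back. A sketch of a claim comparable to testdata/storage-provisioner/pvc.yaml (the claim name comes from the log; accessModes and size are assumptions):

    kubectl --context functional-20220701222815-10066 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi
    EOF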

TestFunctional/parallel/SSHCmd (0.88s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.88s)

TestFunctional/parallel/CpCmd (2.03s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh -n functional-20220701222815-10066 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 cp functional-20220701222815-10066:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1627353540/001/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh -n functional-20220701222815-10066 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.03s)
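
The two cp calls form a round trip, host to node and node back to host, with an ssh cat in between to confirm the node-side copy. Condensed, with the paths from the log (the final host destination is arbitrary):

    out/minikube-linux-amd64 -p functional-20220701222815-10066 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-20220701222815-10066 cp functional-20220701222815-10066:/home/docker/cp-test.txt /tmp/cp-test.txt
    diff testdata/cp-test.txt /tmp/cp-test.txt   # no output = round trip preserved the bytes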

TestFunctional/parallel/MySQL (25.81s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220701222815-10066 replace --force -f testdata/mysql.yaml
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-67f7d69d8b-r4jv5" [520c3f94-6348-4eaf-962f-878e2fd2b877] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-67f7d69d8b-r4jv5" [520c3f94-6348-4eaf-962f-878e2fd2b877] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.012631017s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220701222815-10066 exec mysql-67f7d69d8b-r4jv5 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220701222815-10066 exec mysql-67f7d69d8b-r4jv5 -- mysql -ppassword -e "show databases;": exit status 1 (320.100995ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220701222815-10066 exec mysql-67f7d69d8b-r4jv5 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220701222815-10066 exec mysql-67f7d69d8b-r4jv5 -- mysql -ppassword -e "show databases;": exit status 1 (259.218797ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220701222815-10066 exec mysql-67f7d69d8b-r4jv5 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220701222815-10066 exec mysql-67f7d69d8b-r4jv5 -- mysql -ppassword -e "show databases;": exit status 1 (204.205888ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220701222815-10066 exec mysql-67f7d69d8b-r4jv5 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220701222815-10066 exec mysql-67f7d69d8b-r4jv5 -- mysql -ppassword -e "show databases;": exit status 1 (131.385974ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220701222815-10066 exec mysql-67f7d69d8b-r4jv5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.81s)
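
The four failed exec attempts above are expected: the pod reports Running while mysqld is still initializing, so the queries first hit access-denied (1045) and then socket (2002) errors before succeeding. The test simply retries until a query works, which a manual check can mirror (pod name taken from the log):

    # Poll until mysqld inside the pod answers the query; a sketch of the test's retry loop
    until kubectl --context functional-20220701222815-10066 exec mysql-67f7d69d8b-r4jv5 -- \
        mysql -ppassword -e 'show databases;' >/dev/null 2>&1; do sleep 2; done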

TestFunctional/parallel/FileSync (0.46s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/10066/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "sudo cat /etc/test/nested/copy/10066/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.46s)
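
FileSync verifies that files staged under $MINIKUBE_HOME/files/<path> on the host show up at /<path> inside the node, which is how /etc/test/nested/copy/10066/hosts got there. A sketch of staging such a file by hand, assuming minikube's documented files/ sync behavior:

    mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/10066"
    echo 'Test file for checking file sync process' > "$MINIKUBE_HOME/files/etc/test/nested/copy/10066/hosts"
    # the file is synced into the node on the next start
    out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "cat /etc/test/nested/copy/10066/hosts"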

TestFunctional/parallel/CertSync (2.49s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/10066.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "sudo cat /etc/ssl/certs/10066.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/10066.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "sudo cat /usr/share/ca-certificates/10066.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/100662.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "sudo cat /etc/ssl/certs/100662.pem"
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/100662.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "sudo cat /usr/share/ca-certificates/100662.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.49s)
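
The hashed filenames checked above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash link names, so each synced certificate is reachable in /etc/ssl/certs both by its original name and by its hash. The hash for a given cert can be recomputed locally (the host-side cert location here is an assumption):

    # Prints the 8-hex-digit subject hash OpenSSL uses for the .0 name
    openssl x509 -noout -subject_hash -in "$MINIKUBE_HOME/certs/10066.pem"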

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220701222815-10066 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
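
The go-template above prints only the label keys of the first node; for interactive use, kubectl's built-in flag is simpler:

    kubectl --context functional-20220701222815-10066 get nodes --show-labels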

TestFunctional/parallel/NonActiveRuntimeDisabled (0.87s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "sudo systemctl is-active docker"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "sudo systemctl is-active docker": exit status 1 (453.365742ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:1953: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "sudo systemctl is-active crio": exit status 1 (416.143539ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.87s)
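
systemctl is-active prints the unit state and exits non-zero for anything but active, which is why each probe shows "inactive" on stdout, a remote exit status of 3, and an overall exit status 1 from minikube ssh; the non-zero exits are the expected result here, since only the configured runtime should be running. Checking that runtime directly:

    # On this containerd job, the configured runtime is the only active one
    out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "sudo systemctl is-active containerd"   # expect: active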

TestFunctional/parallel/ProfileCmd/profile_not_create (0.6s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 profile lis

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.60s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.58s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.58s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 image ls --format short
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220701222815-10066 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.7
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.24.2
k8s.gcr.io/kube-proxy:v1.24.2
k8s.gcr.io/kube-controller-manager:v1.24.2
k8s.gcr.io/kube-apiserver:v1.24.2
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220701222815-10066
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220701222815-10066
docker.io/kindest/kindnetd:v20220510-4929dd75
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 image ls --format table
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220701222815-10066 image ls --format table:
|---------------------------------------------|---------------------------------|---------------|--------|
|                    Image                    |               Tag               |   Image ID    |  Size  |
|---------------------------------------------|---------------------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                    | sha256:56cc51 | 2.4MB  |
| k8s.gcr.io/kube-scheduler                   | v1.24.2                         | sha256:5d7251 | 15.5MB |
| k8s.gcr.io/pause                            | 3.1                             | sha256:da86e6 | 315kB  |
| k8s.gcr.io/pause                            | 3.7                             | sha256:221177 | 311kB  |
| docker.io/library/nginx                     | latest                          | sha256:55f4b4 | 56.7MB |
| docker.io/library/nginx                     | alpine                          | sha256:f246e6 | 10.2MB |
| docker.io/library/minikube-local-cache-test | functional-20220701222815-10066 | sha256:df0ec5 | 1.74kB |
| k8s.gcr.io/kube-apiserver                   | v1.24.2                         | sha256:d3377f | 33.8MB |
| k8s.gcr.io/kube-controller-manager          | v1.24.2                         | sha256:34cdf9 | 31MB   |
| k8s.gcr.io/kube-proxy                       | v1.24.2                         | sha256:a63454 | 39.5MB |
| k8s.gcr.io/pause                            | 3.3                             | sha256:0184c1 | 298kB  |
| docker.io/kindest/kindnetd                  | v20220510-4929dd75              | sha256:6fb66c | 45.2MB |
| docker.io/library/mysql                     | 5.7                             | sha256:efa500 | 162MB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                          | sha256:a4ca41 | 13.6MB |
| k8s.gcr.io/etcd                             | 3.5.3-0                         | sha256:aebe75 | 102MB  |
| k8s.gcr.io/pause                            | latest                          | sha256:350b16 | 72.3kB |
| gcr.io/google-containers/addon-resizer      | functional-20220701222815-10066 | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                              | sha256:6e38f4 | 9.06MB |
|---------------------------------------------|---------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.36s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 image ls --format json
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220701222815-10066 image ls --format json:
[{"id":"sha256:1042d9e0d8fcc64f2c6b9ade3af9e8ed255fa04d18d838d0b3650ad7636534a9","repoDigests":["docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"],"repoTags":[],"size":"74611705"},{"id":"sha256:df0ec5d1222ba86e3ca2c1009e51b8b76908d7125916d2c38690ff466389f38b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220701222815-10066"],"size":"1739"},{"id":"sha256:efa50097efbdef5884e5ebaba4da5899e79609b78cd4fe91b365d5d9d3205188","repoDigests":["docker.io/library/mysql@sha256:8b4b41d530c40d77a3205c53f7ecf1026d735648d9a09777845f305953e5eff5"],"repoTags":["docker.io/library/mysql:5.7"],"size":"162489521"},{"id":"sha256:55f4b40fe486a5b734b46bb7bf28f52fa31426bf23be068c8e7b19e58d9b8deb","repoDigests":["docker.io/library/nginx@sha256:10f14ffa93f8dedf1057897b745e5ac72ac5655c299dade0aa434c71557697ea"],"repoTags":["docker.io/library/nginx:latest"],"size":"56748232"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f
709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":["k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e"],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"13585107"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"315399"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"},{"id":"sha256:f246e6f9d0b28d6eb1f7e1f12791f23587c2c6aa42c82aba8d6fe6e2e2de9e95","
repoDigests":["docker.io/library/nginx@sha256:8e38930f0390cbd79b2d1528405fb17edcda5f4a30875ecf338ebaa598dc994e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"10190737"},{"id":"sha256:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b","repoDigests":["k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5"],"repoTags":["k8s.gcr.io/etcd:3.5.3-0"],"size":"102143581"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:34cdf99b1bb3b3a62c5b4226c3bc0983ab1f13e776269d1872092091b07203df","repoDi
gests":["k8s.gcr.io/kube-controller-manager@sha256:d255427f14c9236088c22cd94eb434d7c6a05f615636eac0b9681566cd142753"],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.24.2"],"size":"31035052"},{"id":"sha256:5d725196c1f47e72d2bc7069776d5928b1fb1e4adf09c18997733099aa3663ac","repoDigests":["k8s.gcr.io/kube-scheduler@sha256:b5bc69ac1e173a58a2b3af11ba65057ff2b71de25d0f93ab947e16714a896a1f"],"repoTags":["k8s.gcr.io/kube-scheduler:v1.24.2"],"size":"15488980"},{"id":"sha256:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165","repoDigests":["k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c"],"repoTags":["k8s.gcr.io/pause:3.7"],"size":"311278"},{"id":"sha256:6fb66cd78abfe9e0735a9a751f2586b7984e0d279e87fa8dd175781de6595627","repoDigests":["docker.io/kindest/kindnetd@sha256:39494477a3fa001aae716b704a8991f4f62d2ccf1aaaa65692da6c805b18856c"],"repoTags":["docker.io/kindest/kindnetd:v20220510-4929dd75"],"size":"45239873"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb9
4e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220701222815-10066"],"size":"10823156"},{"id":"sha256:d3377ffb7177cc4becce8a534d8547aca9530cb30fac9ebe479b31102f1ba503","repoDigests":["k8s.gcr.io/kube-apiserver@sha256:433696d8a90870c405fc2d42020aff0966fb3f1c59bdd1f5077f41335b327c9a"],"repoTags":["k8s.gcr.io/kube-apiserver:v1.24.2"],"size":"33795763"},{"id":"sha256:a634548d10b032c2a1d704ef9a2ab04c12b0574afe67ee192b196a7f12da9536","repoDigests":["k8s.gcr.io/kube-proxy@sha256:7e75c20c0fb0a334fa364546ece4c11a61a7595ce2e27de265cacb4e7ccc7f9f"],"repoTags":["k8s.gcr.io/kube-proxy:v1.24.2"],"size":"39515830"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.47s)
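
The JSON form is the easiest to post-process; for example, the untagged entries (empty repoTags, such as the dashboard and metrics-scraper images above) can be pulled out with jq:

    out/minikube-linux-amd64 -p functional-20220701222815-10066 image ls --format json \
      | jq -r '.[] | select(.repoTags == []) | .id'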

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 image ls --format yaml
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220701222815-10066 image ls --format yaml:
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"
- id: sha256:f246e6f9d0b28d6eb1f7e1f12791f23587c2c6aa42c82aba8d6fe6e2e2de9e95
repoDigests:
- docker.io/library/nginx@sha256:8e38930f0390cbd79b2d1528405fb17edcda5f4a30875ecf338ebaa598dc994e
repoTags:
- docker.io/library/nginx:alpine
size: "10190737"
- id: sha256:55f4b40fe486a5b734b46bb7bf28f52fa31426bf23be068c8e7b19e58d9b8deb
repoDigests:
- docker.io/library/nginx@sha256:10f14ffa93f8dedf1057897b745e5ac72ac5655c299dade0aa434c71557697ea
repoTags:
- docker.io/library/nginx:latest
size: "56748232"
- id: sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests:
- k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "13585107"
- id: sha256:d3377ffb7177cc4becce8a534d8547aca9530cb30fac9ebe479b31102f1ba503
repoDigests:
- k8s.gcr.io/kube-apiserver@sha256:433696d8a90870c405fc2d42020aff0966fb3f1c59bdd1f5077f41335b327c9a
repoTags:
- k8s.gcr.io/kube-apiserver:v1.24.2
size: "33795763"
- id: sha256:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165
repoDigests:
- k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c
repoTags:
- k8s.gcr.io/pause:3.7
size: "311278"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:34cdf99b1bb3b3a62c5b4226c3bc0983ab1f13e776269d1872092091b07203df
repoDigests:
- k8s.gcr.io/kube-controller-manager@sha256:d255427f14c9236088c22cd94eb434d7c6a05f615636eac0b9681566cd142753
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.24.2
size: "31035052"
- id: sha256:a634548d10b032c2a1d704ef9a2ab04c12b0574afe67ee192b196a7f12da9536
repoDigests:
- k8s.gcr.io/kube-proxy@sha256:7e75c20c0fb0a334fa364546ece4c11a61a7595ce2e27de265cacb4e7ccc7f9f
repoTags:
- k8s.gcr.io/kube-proxy:v1.24.2
size: "39515830"
- id: sha256:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b
repoDigests:
- k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5
repoTags:
- k8s.gcr.io/etcd:3.5.3-0
size: "102143581"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "315399"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:1042d9e0d8fcc64f2c6b9ade3af9e8ed255fa04d18d838d0b3650ad7636534a9
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3
repoTags: []
size: "74611705"
- id: sha256:df0ec5d1222ba86e3ca2c1009e51b8b76908d7125916d2c38690ff466389f38b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220701222815-10066
size: "1739"
- id: sha256:efa50097efbdef5884e5ebaba4da5899e79609b78cd4fe91b365d5d9d3205188
repoDigests:
- docker.io/library/mysql@sha256:8b4b41d530c40d77a3205c53f7ecf1026d735648d9a09777845f305953e5eff5
repoTags:
- docker.io/library/mysql:5.7
size: "162489521"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:6fb66cd78abfe9e0735a9a751f2586b7984e0d279e87fa8dd175781de6595627
repoDigests:
- docker.io/kindest/kindnetd@sha256:39494477a3fa001aae716b704a8991f4f62d2ccf1aaaa65692da6c805b18856c
repoTags:
- docker.io/kindest/kindnetd:v20220510-4929dd75
size: "45239873"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220701222815-10066
size: "10823156"
- id: sha256:5d725196c1f47e72d2bc7069776d5928b1fb1e4adf09c18997733099aa3663ac
repoDigests:
- k8s.gcr.io/kube-scheduler@sha256:b5bc69ac1e173a58a2b3af11ba65057ff2b71de25d0f93ab947e16714a896a1f
repoTags:
- k8s.gcr.io/kube-scheduler:v1.24.2
size: "15488980"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh pgrep buildkitd
functional_test.go:303: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh pgrep buildkitd: exit status 1 (371.218507ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 image build -t localhost/my-image:functional-20220701222815-10066 testdata/build
2022/07/01 22:30:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p functional-20220701222815-10066 image build -t localhost/my-image:functional-20220701222815-10066 testdata/build: (3.883007161s)
functional_test.go:318: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20220701222815-10066 image build -t localhost/my-image:functional-20220701222815-10066 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.2s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.2s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.6s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.2s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 2.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:0ae1b9b0baa79ce91da7e58c52d967e25749ebb72109884c7328abf4f2b35785 0.0s done
#8 exporting config sha256:3a5361dacce85f1a308284ef2abeb2730e670a88a4df88c881ea396eaff676c9
#8 exporting config sha256:3a5361dacce85f1a308284ef2abeb2730e670a88a4df88c881ea396eaff676c9 done
#8 naming to localhost/my-image:functional-20220701222815-10066 done
#8 DONE 0.1s
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 image ls
E0701 22:30:54.985355   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.49s)
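
The BuildKit steps spell out the Dockerfile under test: a gcr.io/k8s-minikube/busybox base (resolved to the digest shown in step #5), a no-op RUN, and a single ADD. Reconstructed as a sketch, with the context file's content being an assumption:

    mkdir -p build && echo test > build/content.txt
    cat > build/Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    out/minikube-linux-amd64 -p functional-20220701222815-10066 image build -t localhost/my-image:functional-20220701222815-10066 build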

TestFunctional/parallel/ImageCommands/Setup (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.018790623s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220701222815-10066
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.06s)

TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: Took "428.97224ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1324: Took "111.677566ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.61s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-linux-amd64 profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: Took "503.564588ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1374: Took "110.240396ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.61s)
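The profile JSON measured above can be inspected directly; the jq filter below assumes the top-level "valid"/"invalid" arrays minikube has historically emitted, which this log does not itself show:

    # sketch: print the names of valid profiles from the JSON output
    out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'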

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20220701222815-10066 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220701222815-10066 apply -f testdata/testsvc.yaml

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [0f2ac83d-520f-404f-95e4-ed95974c448a] Pending

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [0f2ac83d-520f-404f-95e4-ed95974c448a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [0f2ac83d-520f-404f-95e4-ed95974c448a] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.012588633s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.23s)
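testdata/testsvc.yaml is not reproduced in this log. A minimal manifest consistent with what the test waits for (a pod labeled run=nginx-svc and a nginx-svc service whose LoadBalancer ingress IP the tunnel later fills in) might look like this hypothetical sketch:

    kubectl --context functional-20220701222815-10066 apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-svc          # name queried by the WaitService/IngressIP step below
    spec:
      type: LoadBalancer       # assumed: minikube tunnel assigns ingress IPs to LoadBalancer services
      selector:
        run: nginx-svc         # label the test above waits on
      ports:
      - port: 80
    EOF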

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220701222815-10066

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-linux-amd64 -p functional-20220701222815-10066 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220701222815-10066: (3.844938759s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.10s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220701222815-10066

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-linux-amd64 -p functional-20220701222815-10066 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220701222815-10066: (5.148702274s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.43s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220701222815-10066
functional_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220701222815-10066

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p functional-20220701222815-10066 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220701222815-10066: (6.848477941s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.09s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220701222815-10066 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.105.168.166 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-20220701222815-10066 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
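Condensed, the tunnel subtests above amount to the following shell sequence (the IP is the one AccessDirect reported and will differ per run):

    out/minikube-linux-amd64 -p functional-20220701222815-10066 tunnel &    # keep running in the background
    kubectl --context functional-20220701222815-10066 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://10.105.168.166/                                             # address assigned by the tunnel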

TestFunctional/parallel/MountCmd/any-port (12.04s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220701222815-10066 /tmp/TestFunctionalparallelMountCmdany-port103759360/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1656714626910082938" to /tmp/TestFunctionalparallelMountCmdany-port103759360/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1656714626910082938" to /tmp/TestFunctionalparallelMountCmdany-port103759360/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1656714626910082938" to /tmp/TestFunctionalparallelMountCmdany-port103759360/001/test-1656714626910082938
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (425.563106ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul  1 22:30 created-by-test
-rw-r--r-- 1 docker docker 24 Jul  1 22:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul  1 22:30 test-1656714626910082938
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh cat /mount-9p/test-1656714626910082938
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220701222815-10066 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [3e824941-a8eb-41f9-840d-3cac734fe6de] Pending
helpers_test.go:342: "busybox-mount" [3e824941-a8eb-41f9-840d-3cac734fe6de] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [3e824941-a8eb-41f9-840d-3cac734fe6de] Running
E0701 22:30:34.503823   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
E0701 22:30:34.509530   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
E0701 22:30:34.519770   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
E0701 22:30:34.539922   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
E0701 22:30:34.580162   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
E0701 22:30:34.660438   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
E0701 22:30:34.820849   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
E0701 22:30:35.141884   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [3e824941-a8eb-41f9-840d-3cac734fe6de] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [3e824941-a8eb-41f9-840d-3cac734fe6de] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.00664085s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220701222815-10066 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh stat /mount-9p/created-by-test

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh stat /mount-9p/created-by-pod

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220701222815-10066 /tmp/TestFunctionalparallelMountCmdany-port103759360/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (12.04s)
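The mount flow above can be reproduced by hand; as the retried findmnt shows, the first probe may exit 1 until the 9p server finishes mounting (/tmp/src below is a placeholder for the host directory):

    out/minikube-linux-amd64 mount -p functional-20220701222815-10066 /tmp/src:/mount-9p &
    out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh -- ls -la /mount-9p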

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 image save gcr.io/google-containers/addon-resizer:functional-20220701222815-10066 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Done: out/minikube-linux-amd64 -p functional-20220701222815-10066 image save gcr.io/google-containers/addon-resizer:functional-20220701222815-10066 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.791899225s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.79s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 image rm gcr.io/google-containers/addon-resizer:functional-20220701222815-10066

=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Done: out/minikube-linux-amd64 -p functional-20220701222815-10066 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.92419243s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.17s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220701222815-10066
functional_test.go:419: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220701222815-10066
E0701 22:30:35.782968   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
functional_test.go:419: (dbg) Done: out/minikube-linux-amd64 -p functional-20220701222815-10066 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220701222815-10066: (1.038986903s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220701222815-10066
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.11s)
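Taken together, the ImageCommands subtests above exercise a save/remove/load round trip; condensed into shell form (IMG is shorthand for the tag used above):

    IMG=gcr.io/google-containers/addon-resizer:functional-20220701222815-10066
    out/minikube-linux-amd64 -p functional-20220701222815-10066 image save $IMG addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-20220701222815-10066 image rm $IMG
    out/minikube-linux-amd64 -p functional-20220701222815-10066 image load addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-20220701222815-10066 image save --daemon $IMG
    docker image inspect $IMG    # confirms the image is back in the host daemon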

TestFunctional/parallel/MountCmd/specific-port (2.27s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220701222815-10066 /tmp/TestFunctionalparallelMountCmdspecific-port2949919660/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (405.31179ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
E0701 22:30:39.624367   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220701222815-10066 /tmp/TestFunctionalparallelMountCmdspecific-port2949919660/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh "sudo umount -f /mount-9p": exit status 1 (363.74845ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-20220701222815-10066 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220701222815-10066 /tmp/TestFunctionalparallelMountCmdspecific-port2949919660/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.27s)

TestFunctional/delete_addon-resizer_images (0.1s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220701222815-10066
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220701222815-10066
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220701222815-10066
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestIngressAddonLegacy/StartLegacyK8sCluster (75.3s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20220701223107-10066 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0701 22:31:15.466040   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
E0701 22:31:56.426510   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220701223107-10066 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m15.299266205s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (75.30s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.18s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220701223107-10066 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220701223107-10066 addons enable ingress --alsologtostderr -v=5: (9.177230088s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.18s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220701223107-10066 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (34.47s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:164: (dbg) Run:  kubectl --context ingress-addon-legacy-20220701223107-10066 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:164: (dbg) Done: kubectl --context ingress-addon-legacy-20220701223107-10066 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.199760895s)
addons_test.go:184: (dbg) Run:  kubectl --context ingress-addon-legacy-20220701223107-10066 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-20220701223107-10066 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:202: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [16c0e17f-b644-4c64-b89f-68524fa5804b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [16c0e17f-b644-4c64-b89f-68524fa5804b] Running
addons_test.go:202: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.006136688s
addons_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220701223107-10066 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Run:  kubectl --context ingress-addon-legacy-20220701223107-10066 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220701223107-10066 ip
addons_test.go:249: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220701223107-10066 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:258: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220701223107-10066 addons disable ingress-dns --alsologtostderr -v=1: (4.703990113s)
addons_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220701223107-10066 addons disable ingress --alsologtostderr -v=1
addons_test.go:263: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220701223107-10066 addons disable ingress --alsologtostderr -v=1: (7.275446492s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (34.47s)

TestJSONOutput/start/Command (45.75s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20220701223309-10066 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0701 22:33:18.346707   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220701223309-10066 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (45.752822834s)
--- PASS: TestJSONOutput/start/Command (45.75s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20220701223309-10066 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20220701223309-10066 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (20.23s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20220701223309-10066 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220701223309-10066 --output=json --user=testUser: (20.224969145s)
--- PASS: TestJSONOutput/stop/Command (20.23s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.31s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20220701223421-10066 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220701223421-10066 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (84.871649ms)

-- stdout --
	{"specversion":"1.0","id":"a813b870-ecf0-4ce7-9cd5-e27b537a17a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220701223421-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e096ea05-d819-460b-b10b-8fd0609cfc2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14483"}}
	{"specversion":"1.0","id":"318250de-cee0-40aa-8e4c-f65019e10557","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ea729b4b-e734-41ed-8c64-77e2165b815d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig"}}
	{"specversion":"1.0","id":"c3dd6df1-002a-4a21-a0f6-ac85838d3887","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube"}}
	{"specversion":"1.0","id":"1b81afc8-67f4-41df-a249-ecae9046eae3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"fc93ffa3-39f3-4ab7-890b-52f2703bd817","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220701223421-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20220701223421-10066
--- PASS: TestErrorJSONOutput (0.31s)
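Each stdout line above is a self-contained CloudEvents JSON object, so error events can be isolated with a line-oriented jq filter; a sketch (the fail driver and the event type string come straight from the output above, the profile name is a placeholder):

    out/minikube-linux-amd64 start -p json-output-error-demo --output=json --driver=fail \
      | jq 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'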

TestKicCustomNetwork/create_custom_network (34.57s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220701223422-10066 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220701223422-10066 --network=: (32.244038155s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220701223422-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220701223422-10066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220701223422-10066: (2.288642889s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.57s)

TestKicCustomNetwork/use_default_bridge_network (30.03s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220701223456-10066 --network=bridge
E0701 22:35:13.509515   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
E0701 22:35:13.514781   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
E0701 22:35:13.525020   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
E0701 22:35:13.545250   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
E0701 22:35:13.585509   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
E0701 22:35:13.665824   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
E0701 22:35:13.826204   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
E0701 22:35:14.146817   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
E0701 22:35:14.787733   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
E0701 22:35:16.068050   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
E0701 22:35:18.629753   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
E0701 22:35:23.750564   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220701223456-10066 --network=bridge: (27.864368978s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220701223456-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220701223456-10066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220701223456-10066: (2.135838145s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.03s)

TestKicExistingNetwork (30.87s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20220701223526-10066 --network=existing-network
E0701 22:35:33.991528   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
E0701 22:35:34.503272   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
E0701 22:35:54.472779   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20220701223526-10066 --network=existing-network: (28.598508701s)
helpers_test.go:175: Cleaning up "existing-network-20220701223526-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20220701223526-10066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20220701223526-10066: (2.058397525s)
--- PASS: TestKicExistingNetwork (30.87s)

TestKicCustomSubnet (29.59s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-20220701223557-10066 --subnet=192.168.60.0/24
E0701 22:36:02.186982   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-20220701223557-10066 --subnet=192.168.60.0/24: (27.322269109s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220701223557-10066 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220701223557-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-20220701223557-10066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-20220701223557-10066: (2.234150363s)
--- PASS: TestKicCustomSubnet (29.59s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (53.06s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-20220701223627-10066 --driver=docker  --container-runtime=containerd
E0701 22:36:35.433940   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-20220701223627-10066 --driver=docker  --container-runtime=containerd: (23.111420185s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-20220701223627-10066 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-20220701223627-10066 --driver=docker  --container-runtime=containerd: (23.99313304s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-20220701223627-10066
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-20220701223627-10066
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-20220701223627-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-20220701223627-10066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-20220701223627-10066: (2.27439288s)
helpers_test.go:175: Cleaning up "first-20220701223627-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-20220701223627-10066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-20220701223627-10066: (2.353230166s)
--- PASS: TestMinikubeProfile (53.06s)
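Both `profile list -ojson` invocations above return machine-readable profile data. A hedged sketch of consuming it; the real schema carries more fields and may change between minikube releases, so the valid/invalid keys and the Name field below are assumptions based on this version's output:

    // profiles.go - print the names of valid minikube profiles.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Only the fields this sketch reads; the actual objects are larger.
    type profile struct {
        Name string `json:"Name"`
    }

    type profileList struct {
        Valid   []profile `json:"valid"`
        Invalid []profile `json:"invalid"`
    }

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
        if err != nil {
            fmt.Println("profile list failed:", err)
            return
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            fmt.Println("unexpected JSON:", err)
            return
        }
        for _, p := range pl.Valid {
            fmt.Println("valid profile:", p.Name)
        }
    }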

TestMountStart/serial/StartWithMountFirst (5.18s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20220701223720-10066 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220701223720-10066 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.180905066s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.18s)
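The flag cluster here configures minikube's 9p host mount: --mount-msize is the 9p message size, --mount-port pins the host-side server port, and --mount-uid/--mount-gid set ownership of the files as seen in the guest; --no-kubernetes keeps the node Kubernetes-free so only the mount is under test. The VerifyMount* steps that follow all use the same probe, sketched here (profile name copied from this run):

    // mountprobe.go - the probe the VerifyMount* steps use: list the mount in the guest.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        profile := "mount-start-1-20220701223720-10066"
        // `minikube ssh -- ls /minikube-host` exits non-zero if the mount is absent.
        out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
            "ssh", "--", "ls", "/minikube-host").CombinedOutput()
        if err != nil {
            fmt.Printf("mount probe failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("mount is live; host dir contents:\n%s", out)
    }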

TestMountStart/serial/VerifyMountFirst (0.34s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220701223720-10066 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.34s)

TestMountStart/serial/StartWithMountSecond (4.85s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220701223720-10066 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220701223720-10066 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.844939487s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.85s)

TestMountStart/serial/VerifyMountSecond (0.35s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220701223720-10066 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

TestMountStart/serial/DeleteFirst (1.81s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20220701223720-10066 --alsologtostderr -v=5
E0701 22:37:32.035258   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
E0701 22:37:32.040568   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
E0701 22:37:32.050816   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
E0701 22:37:32.071072   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
E0701 22:37:32.111347   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
E0701 22:37:32.191650   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
E0701 22:37:32.352070   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
E0701 22:37:32.672469   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220701223720-10066 --alsologtostderr -v=5: (1.808662009s)
--- PASS: TestMountStart/serial/DeleteFirst (1.81s)

TestMountStart/serial/VerifyMountPostDelete (0.35s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220701223720-10066 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.35s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20220701223720-10066
E0701 22:37:33.313225   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220701223720-10066: (1.268811776s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (6.44s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220701223720-10066
E0701 22:37:34.593861   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
E0701 22:37:37.154009   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220701223720-10066: (5.435264269s)
--- PASS: TestMountStart/serial/RestartStopped (6.44s)

TestMountStart/serial/VerifyMountPostStop (0.35s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220701223720-10066 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)

TestMultiNode/serial/FreshStart2Nodes (92.58s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220701223743-10066 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0701 22:37:52.515822   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
E0701 22:37:57.354110   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
E0701 22:38:12.996810   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
E0701 22:38:53.957786   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220701223743-10066 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m31.999532893s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (92.58s)

TestMultiNode/serial/DeployApp2Nodes (3.17s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220701223743-10066 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220701223743-10066 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220701223743-10066 -- rollout status deployment/busybox: (1.48914168s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220701223743-10066 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220701223743-10066 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220701223743-10066 -- exec busybox-d46db594c-hr5mb -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220701223743-10066 -- exec busybox-d46db594c-qb6qm -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220701223743-10066 -- exec busybox-d46db594c-hr5mb -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220701223743-10066 -- exec busybox-d46db594c-qb6qm -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220701223743-10066 -- exec busybox-d46db594c-hr5mb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220701223743-10066 -- exec busybox-d46db594c-qb6qm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.17s)
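The DNS sweep resolves a public name, the in-cluster service name, and its fully qualified form from pods scheduled on both nodes, which is what proves cluster DNS works across the multinode setup. A sketch of the same loop (pod names are the ones reported above; kubectl is assumed to already point at this cluster):

    // dnscheck.go - run the DeployApp2Nodes DNS lookups from each busybox pod.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        pods := []string{"busybox-d46db594c-hr5mb", "busybox-d46db594c-qb6qm"}
        names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
        for _, pod := range pods {
            for _, name := range names {
                out, err := exec.Command("kubectl", "exec", pod, "--", "nslookup", name).CombinedOutput()
                if err != nil {
                    fmt.Printf("%s: lookup of %s failed: %v\n%s", pod, name, err, out)
                    continue
                }
                fmt.Printf("%s: %s resolves\n", pod, name)
            }
        }
    }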

TestMultiNode/serial/PingHostFrom2Pods (0.87s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220701223743-10066 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220701223743-10066 -- exec busybox-d46db594c-hr5mb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220701223743-10066 -- exec busybox-d46db594c-hr5mb -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220701223743-10066 -- exec busybox-d46db594c-qb6qm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220701223743-10066 -- exec busybox-d46db594c-qb6qm -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)
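The shell pipeline is the interesting part: with busybox's nslookup output, `awk 'NR==5'` selects the answer line ("Address 1: <ip> <name>") and `cut -d' ' -f3` pulls out the IP, which the pod then pings; 192.168.58.1 is the host-side gateway of this cluster's Docker network. A sketch that does the same extraction by matching the answer line rather than a fixed line number:

    // hostping.go - resolve host.minikube.internal from a pod, then ping the result once.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        pod := "busybox-d46db594c-hr5mb" // one of the pods listed above
        out, err := exec.Command("kubectl", "exec", pod, "--", "nslookup", "host.minikube.internal").Output()
        if err != nil {
            fmt.Println("nslookup failed:", err)
            return
        }
        var ip string
        for _, line := range strings.Split(string(out), "\n") {
            // busybox prints answers as "Address 1: <ip> <name>".
            if f := strings.Fields(line); len(f) >= 4 && f[0] == "Address" && f[3] == "host.minikube.internal" {
                ip = f[2]
            }
        }
        if ip == "" {
            fmt.Println("no answer for host.minikube.internal")
            return
        }
        ping, err := exec.Command("kubectl", "exec", pod, "--", "ping", "-c", "1", ip).CombinedOutput()
        fmt.Printf("%s", ping)
        if err != nil {
            fmt.Println("ping failed:", err)
        }
    }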

TestMultiNode/serial/AddNode (39.57s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220701223743-10066 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220701223743-10066 -v 3 --alsologtostderr: (38.797197064s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (39.57s)

TestMultiNode/serial/ProfileList (0.39s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.39s)

TestMultiNode/serial/CopyFile (12.48s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 cp testdata/cp-test.txt multinode-20220701223743-10066:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 ssh -n multinode-20220701223743-10066 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 cp multinode-20220701223743-10066:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3748412467/001/cp-test_multinode-20220701223743-10066.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 ssh -n multinode-20220701223743-10066 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 cp multinode-20220701223743-10066:/home/docker/cp-test.txt multinode-20220701223743-10066-m02:/home/docker/cp-test_multinode-20220701223743-10066_multinode-20220701223743-10066-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 ssh -n multinode-20220701223743-10066 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 ssh -n multinode-20220701223743-10066-m02 "sudo cat /home/docker/cp-test_multinode-20220701223743-10066_multinode-20220701223743-10066-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 cp multinode-20220701223743-10066:/home/docker/cp-test.txt multinode-20220701223743-10066-m03:/home/docker/cp-test_multinode-20220701223743-10066_multinode-20220701223743-10066-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 ssh -n multinode-20220701223743-10066 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 ssh -n multinode-20220701223743-10066-m03 "sudo cat /home/docker/cp-test_multinode-20220701223743-10066_multinode-20220701223743-10066-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 cp testdata/cp-test.txt multinode-20220701223743-10066-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 ssh -n multinode-20220701223743-10066-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 cp multinode-20220701223743-10066-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3748412467/001/cp-test_multinode-20220701223743-10066-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 ssh -n multinode-20220701223743-10066-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 cp multinode-20220701223743-10066-m02:/home/docker/cp-test.txt multinode-20220701223743-10066:/home/docker/cp-test_multinode-20220701223743-10066-m02_multinode-20220701223743-10066.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 ssh -n multinode-20220701223743-10066-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 ssh -n multinode-20220701223743-10066 "sudo cat /home/docker/cp-test_multinode-20220701223743-10066-m02_multinode-20220701223743-10066.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 cp multinode-20220701223743-10066-m02:/home/docker/cp-test.txt multinode-20220701223743-10066-m03:/home/docker/cp-test_multinode-20220701223743-10066-m02_multinode-20220701223743-10066-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 ssh -n multinode-20220701223743-10066-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 ssh -n multinode-20220701223743-10066-m03 "sudo cat /home/docker/cp-test_multinode-20220701223743-10066-m02_multinode-20220701223743-10066-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 cp testdata/cp-test.txt multinode-20220701223743-10066-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 ssh -n multinode-20220701223743-10066-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 cp multinode-20220701223743-10066-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3748412467/001/cp-test_multinode-20220701223743-10066-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 ssh -n multinode-20220701223743-10066-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 cp multinode-20220701223743-10066-m03:/home/docker/cp-test.txt multinode-20220701223743-10066:/home/docker/cp-test_multinode-20220701223743-10066-m03_multinode-20220701223743-10066.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 ssh -n multinode-20220701223743-10066-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 ssh -n multinode-20220701223743-10066 "sudo cat /home/docker/cp-test_multinode-20220701223743-10066-m03_multinode-20220701223743-10066.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 cp multinode-20220701223743-10066-m03:/home/docker/cp-test.txt multinode-20220701223743-10066-m02:/home/docker/cp-test_multinode-20220701223743-10066-m03_multinode-20220701223743-10066-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 ssh -n multinode-20220701223743-10066-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 ssh -n multinode-20220701223743-10066-m02 "sudo cat /home/docker/cp-test_multinode-20220701223743-10066-m03_multinode-20220701223743-10066-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (12.48s)
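The copy matrix above is systematic: seed each node with testdata/cp-test.txt, copy it back out to the host, then copy it node-to-node for every ordered pair, verifying each hop with `sudo cat` over SSH. A condensed sketch of the node-to-node part (profile and paths copied from this run):

    // cpmatrix.go - exercise `minikube cp` between every ordered pair of nodes.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // run shells out to the minikube binary under test and surfaces output on failure.
    func run(args ...string) error {
        out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%v: %v\n%s", args, err, out)
        }
        return nil
    }

    func main() {
        profile := "multinode-20220701223743-10066"
        nodes := []string{profile, profile + "-m02", profile + "-m03"}
        for _, src := range nodes {
            // Seed the source node, then fan the file out to every other node.
            if err := run("-p", profile, "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt"); err != nil {
                fmt.Println(err)
                return
            }
            for _, dst := range nodes {
                if src == dst {
                    continue
                }
                dstPath := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
                if err := run("-p", profile, "cp", src+":/home/docker/cp-test.txt", dst+":"+dstPath); err != nil {
                    fmt.Println(err)
                    return
                }
                // Verify the hop the same way helpers_test.go does.
                if err := run("-p", profile, "ssh", "-n", dst, "sudo cat "+dstPath); err != nil {
                    fmt.Println(err)
                    return
                }
            }
        }
        fmt.Println("cp matrix OK")
    }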

TestMultiNode/serial/StopNode (2.49s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 node stop m03
E0701 22:40:13.508953   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220701223743-10066 node stop m03: (1.284735926s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220701223743-10066 status: exit status 7 (600.009371ms)

-- stdout --
	multinode-20220701223743-10066
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220701223743-10066-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220701223743-10066-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220701223743-10066 status --alsologtostderr: exit status 7 (603.239853ms)

-- stdout --
	multinode-20220701223743-10066
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220701223743-10066-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220701223743-10066-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0701 22:40:14.230152  100789 out.go:296] Setting OutFile to fd 1 ...
	I0701 22:40:14.230271  100789 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:40:14.230284  100789 out.go:309] Setting ErrFile to fd 2...
	I0701 22:40:14.230291  100789 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:40:14.230396  100789 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 22:40:14.230561  100789 out.go:303] Setting JSON to false
	I0701 22:40:14.230581  100789 mustload.go:65] Loading cluster: multinode-20220701223743-10066
	I0701 22:40:14.230880  100789 config.go:178] Loaded profile config "multinode-20220701223743-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 22:40:14.230895  100789 status.go:253] checking status of multinode-20220701223743-10066 ...
	I0701 22:40:14.231232  100789 cli_runner.go:164] Run: docker container inspect multinode-20220701223743-10066 --format={{.State.Status}}
	I0701 22:40:14.263060  100789 status.go:328] multinode-20220701223743-10066 host status = "Running" (err=<nil>)
	I0701 22:40:14.263089  100789 host.go:66] Checking if "multinode-20220701223743-10066" exists ...
	I0701 22:40:14.263326  100789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220701223743-10066
	I0701 22:40:14.294066  100789 host.go:66] Checking if "multinode-20220701223743-10066" exists ...
	I0701 22:40:14.294352  100789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 22:40:14.294401  100789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220701223743-10066
	I0701 22:40:14.325670  100789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49227 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/multinode-20220701223743-10066/id_rsa Username:docker}
	I0701 22:40:14.406966  100789 ssh_runner.go:195] Run: systemctl --version
	I0701 22:40:14.410704  100789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 22:40:14.419323  100789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 22:40:14.523661  100789 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-07-01 22:40:14.450472248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 22:40:14.524168  100789 kubeconfig.go:92] found "multinode-20220701223743-10066" server: "https://192.168.58.2:8443"
	I0701 22:40:14.524192  100789 api_server.go:165] Checking apiserver status ...
	I0701 22:40:14.524230  100789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 22:40:14.533174  100789 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1212/cgroup
	I0701 22:40:14.540593  100789 api_server.go:181] apiserver freezer: "4:freezer:/docker/d2c4c2d0a348488b21acbd900f3c2b4f936e8f6f3f1f3c946e92419f9e8fb12b/kubepods/burstable/pod96db22c53ead48e21f34e768503e7ac6/9b9d9a4a6c097b23b8d7834f3b3efab38d9e4f71daefe2d35c589be0630f1254"
	I0701 22:40:14.540650  100789 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d2c4c2d0a348488b21acbd900f3c2b4f936e8f6f3f1f3c946e92419f9e8fb12b/kubepods/burstable/pod96db22c53ead48e21f34e768503e7ac6/9b9d9a4a6c097b23b8d7834f3b3efab38d9e4f71daefe2d35c589be0630f1254/freezer.state
	I0701 22:40:14.546724  100789 api_server.go:203] freezer state: "THAWED"
	I0701 22:40:14.546765  100789 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0701 22:40:14.551558  100789 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0701 22:40:14.551578  100789 status.go:419] multinode-20220701223743-10066 apiserver status = Running (err=<nil>)
	I0701 22:40:14.551587  100789 status.go:255] multinode-20220701223743-10066 status: &{Name:multinode-20220701223743-10066 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0701 22:40:14.551603  100789 status.go:253] checking status of multinode-20220701223743-10066-m02 ...
	I0701 22:40:14.551829  100789 cli_runner.go:164] Run: docker container inspect multinode-20220701223743-10066-m02 --format={{.State.Status}}
	I0701 22:40:14.587339  100789 status.go:328] multinode-20220701223743-10066-m02 host status = "Running" (err=<nil>)
	I0701 22:40:14.587364  100789 host.go:66] Checking if "multinode-20220701223743-10066-m02" exists ...
	I0701 22:40:14.587680  100789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220701223743-10066-m02
	I0701 22:40:14.620092  100789 host.go:66] Checking if "multinode-20220701223743-10066-m02" exists ...
	I0701 22:40:14.620347  100789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 22:40:14.620387  100789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220701223743-10066-m02
	I0701 22:40:14.650394  100789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49232 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/multinode-20220701223743-10066-m02/id_rsa Username:docker}
	I0701 22:40:14.730958  100789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 22:40:14.739660  100789 status.go:255] multinode-20220701223743-10066-m02 status: &{Name:multinode-20220701223743-10066-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0701 22:40:14.739689  100789 status.go:253] checking status of multinode-20220701223743-10066-m03 ...
	I0701 22:40:14.739923  100789 cli_runner.go:164] Run: docker container inspect multinode-20220701223743-10066-m03 --format={{.State.Status}}
	I0701 22:40:14.772275  100789 status.go:328] multinode-20220701223743-10066-m03 host status = "Stopped" (err=<nil>)
	I0701 22:40:14.772295  100789 status.go:341] host is not running, skipping remaining checks
	I0701 22:40:14.772300  100789 status.go:255] multinode-20220701223743-10066-m03 status: &{Name:multinode-20220701223743-10066-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.49s)
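The stderr trace lays out the whole status pipeline for a control-plane node: inspect the container, SSH in, check the kubelet unit, find the apiserver PID, confirm its freezer cgroup is THAWED, and finally hit /healthz. A sketch of just that last hop; unlike minikube's status code, it presents no client certificate and skips TLS verification, so a cluster that rejects anonymous requests may answer 401 instead of 200:

    // healthz.go - probe the apiserver /healthz endpoint, the final step of `minikube status`.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver uses a self-signed CA; skip verification for this quick probe only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.58.2:8443/healthz") // endpoint from the log above
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }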

TestMultiNode/serial/StartAfterStop (31.09s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 node start m03 --alsologtostderr
E0701 22:40:15.878434   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
E0701 22:40:34.503173   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
E0701 22:40:41.194439   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220701223743-10066 node start m03 --alsologtostderr: (30.24348909s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.09s)

TestMultiNode/serial/RestartKeepsNodes (155.93s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220701223743-10066
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20220701223743-10066
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220701223743-10066: (41.288976744s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220701223743-10066 --wait=true -v=8 --alsologtostderr
E0701 22:42:32.034712   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
E0701 22:42:59.720540   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220701223743-10066 --wait=true -v=8 --alsologtostderr: (1m54.514033414s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220701223743-10066
--- PASS: TestMultiNode/serial/RestartKeepsNodes (155.93s)

TestMultiNode/serial/DeleteNode (5.13s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220701223743-10066 node delete m03: (4.42223613s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.13s)
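The go-template handed to kubectl prints one line per node with the status of its Ready condition, so after deleting m03 the test expects exactly two True lines. A sketch that counts them (the template is reflowed slightly from the quoted one):

    // readynodes.go - count Ready nodes using the same kind of go-template as the test.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
        out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
        if err != nil {
            fmt.Println("kubectl failed:", err)
            return
        }
        ready := 0
        for _, s := range strings.Fields(string(out)) {
            if s == "True" {
                ready++
            }
        }
        fmt.Printf("%d Ready node(s)\n", ready)
    }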

TestMultiNode/serial/StopMultiNode (40.32s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220701223743-10066 stop: (40.067911206s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220701223743-10066 status: exit status 7 (129.064527ms)

-- stdout --
	multinode-20220701223743-10066
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220701223743-10066-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220701223743-10066 status --alsologtostderr: exit status 7 (125.531827ms)

-- stdout --
	multinode-20220701223743-10066
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220701223743-10066-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0701 22:44:07.175459  111210 out.go:296] Setting OutFile to fd 1 ...
	I0701 22:44:07.175567  111210 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:44:07.175577  111210 out.go:309] Setting ErrFile to fd 2...
	I0701 22:44:07.175581  111210 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:44:07.175680  111210 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 22:44:07.175819  111210 out.go:303] Setting JSON to false
	I0701 22:44:07.175836  111210 mustload.go:65] Loading cluster: multinode-20220701223743-10066
	I0701 22:44:07.176143  111210 config.go:178] Loaded profile config "multinode-20220701223743-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0701 22:44:07.176159  111210 status.go:253] checking status of multinode-20220701223743-10066 ...
	I0701 22:44:07.176500  111210 cli_runner.go:164] Run: docker container inspect multinode-20220701223743-10066 --format={{.State.Status}}
	I0701 22:44:07.209186  111210 status.go:328] multinode-20220701223743-10066 host status = "Stopped" (err=<nil>)
	I0701 22:44:07.209214  111210 status.go:341] host is not running, skipping remaining checks
	I0701 22:44:07.209223  111210 status.go:255] multinode-20220701223743-10066 status: &{Name:multinode-20220701223743-10066 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0701 22:44:07.209258  111210 status.go:253] checking status of multinode-20220701223743-10066-m02 ...
	I0701 22:44:07.209502  111210 cli_runner.go:164] Run: docker container inspect multinode-20220701223743-10066-m02 --format={{.State.Status}}
	I0701 22:44:07.239389  111210 status.go:328] multinode-20220701223743-10066-m02 host status = "Stopped" (err=<nil>)
	I0701 22:44:07.239410  111210 status.go:341] host is not running, skipping remaining checks
	I0701 22:44:07.239418  111210 status.go:255] multinode-20220701223743-10066-m02 status: &{Name:multinode-20220701223743-10066-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (40.32s)
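As in StopNode, exit status 7 from `minikube status` is the expected signal that hosts are stopped rather than a failure of the command itself. A sketch of how a caller tells the two apart:

    // statuscode.go - treat `minikube status` exit code 7 as "stopped", not as an error.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-20220701223743-10066", "status")
        out, err := cmd.Output()
        fmt.Print(string(out))
        var ee *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("all components running")
        case errors.As(err, &ee) && ee.ExitCode() == 7:
            // Exit 7 is how status reports stopped hosts; fine for a stopped cluster.
            fmt.Println("cluster has stopped components (exit 7)")
        default:
            fmt.Println("status failed:", err)
        }
    }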

TestMultiNode/serial/RestartMultiNode (83.22s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220701223743-10066 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0701 22:45:13.509750   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220701223743-10066 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m22.488244402s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220701223743-10066 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (83.22s)

TestMultiNode/serial/ValidateNameConflict (26.55s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220701223743-10066
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220701223743-10066-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220701223743-10066-m02 --driver=docker  --container-runtime=containerd: exit status 14 (78.713493ms)

-- stdout --
	* [multinode-20220701223743-10066-m02] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14483
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220701223743-10066-m02' is duplicated with machine name 'multinode-20220701223743-10066-m02' in profile 'multinode-20220701223743-10066'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220701223743-10066-m03 --driver=docker  --container-runtime=containerd
E0701 22:45:34.503336   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220701223743-10066-m03 --driver=docker  --container-runtime=containerd: (23.803234166s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220701223743-10066
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220701223743-10066: exit status 80 (346.328192ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220701223743-10066
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220701223743-10066-m03 already exists in multinode-20220701223743-10066-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20220701223743-10066-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220701223743-10066-m03: (2.261382819s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.55s)

TestPreload (115.31s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220701224601-10066 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0
E0701 22:46:57.547198   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220701224601-10066 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0: (1m10.761285168s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220701224601-10066 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20220701224601-10066 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.048847356s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220701224601-10066 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3
E0701 22:47:32.034385   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220701224601-10066 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3: (40.606506563s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220701224601-10066 -- sudo crictl image ls
helpers_test.go:175: Cleaning up "test-preload-20220701224601-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20220701224601-10066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220701224601-10066: (2.513000585s)
--- PASS: TestPreload (115.31s)
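The ordering is what this test is about: start v1.17.0 with --preload=false, pull busybox by hand with crictl, then restart onto v1.17.3 so the preloaded image tarball for the new version gets applied; the closing `crictl image ls` asserts the hand-pulled image survived that preload. A sketch of the final assertion (profile name from this run):

    // imagecheck.go - verify a side-loaded image survived the preloaded restart.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        profile := "test-preload-20220701224601-10066"
        out, err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile,
            "--", "sudo", "crictl", "image", "ls").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
            fmt.Println("busybox image survived the restart")
        } else {
            fmt.Println("busybox image is missing after the restart")
        }
    }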

TestScheduledStopUnix (100.76s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20220701224756-10066 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220701224756-10066 --memory=2048 --driver=docker  --container-runtime=containerd: (23.819983014s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220701224756-10066 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220701224756-10066 -n scheduled-stop-20220701224756-10066
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220701224756-10066 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220701224756-10066 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220701224756-10066 -n scheduled-stop-20220701224756-10066
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220701224756-10066
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220701224756-10066 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220701224756-10066
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220701224756-10066: exit status 7 (96.269165ms)

-- stdout --
	scheduled-stop-20220701224756-10066
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220701224756-10066 -n scheduled-stop-20220701224756-10066
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220701224756-10066 -n scheduled-stop-20220701224756-10066: exit status 7 (94.527645ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220701224756-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20220701224756-10066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20220701224756-10066: (5.107450409s)
--- PASS: TestScheduledStopUnix (100.76s)
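Scheduled stop hands off to a background process, which is why the test reads {{.TimeToStop}} from status and why "os: process already finished" appears when a fresh schedule replaces one that already ran. A compact sketch of the schedule/inspect/cancel cycle (error handling reduced to logging):

    // schedstop.go - schedule a stop, read the pending deadline, then cancel it.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // mk runs the minikube binary under test and returns whatever it printed.
    func mk(args ...string) string {
        out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
        if err != nil {
            fmt.Printf("%v: %v\n", args, err)
        }
        return string(out)
    }

    func main() {
        p := "scheduled-stop-20220701224756-10066"
        mk("stop", "-p", p, "--schedule", "5m")                         // forks the background stopper
        fmt.Print(mk("status", "-p", p, "--format", "{{.TimeToStop}}")) // pending deadline
        mk("stop", "-p", p, "--cancel-scheduled")                       // kills the scheduled stop
    }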

TestInsufficientStorage (16.6s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20220701224937-10066 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20220701224937-10066 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.845140175s)

-- stdout --
	{"specversion":"1.0","id":"dc0e4c58-9a38-414b-8be8-545f7c7b2a39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220701224937-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b5fa4ec2-770f-4665-ae7a-5744c22782b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14483"}}
	{"specversion":"1.0","id":"01b6c72d-5955-462a-bace-76716a145071","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"74af98cf-463d-4917-95ba-7a76b9c92466","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig"}}
	{"specversion":"1.0","id":"4cad9203-6f6a-45cd-89eb-ffeeb65057bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube"}}
	{"specversion":"1.0","id":"03157ce4-ab54-4e89-84bc-6ba73734cdb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"039e1062-da1d-44c1-b14c-b28f28490c26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"4e36026b-70ca-4a77-a05e-f9aaa27c8a73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"bc301e29-04dd-4363-87d7-6fa487fc8753","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"683c8a38-704a-4071-b000-73de5eb86a9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"1db9910b-5e5f-4267-88a1-0ae4d0f7636a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220701224937-10066 in cluster insufficient-storage-20220701224937-10066","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1812308-fc78-4a77-9b4f-151d78d21902","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"431d39c7-3d8d-4604-b892-1355eb2bec2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a6dd828c-6b34-4b28-bb99-772f78a87cb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220701224937-10066 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220701224937-10066 --output=json --layout=cluster: exit status 7 (356.651602ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20220701224937-10066","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220701224937-10066","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0701 22:49:47.458261  132129 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220701224937-10066" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220701224937-10066 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220701224937-10066 --output=json --layout=cluster: exit status 7 (359.443589ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20220701224937-10066","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220701224937-10066","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0701 22:49:47.819210  132237 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220701224937-10066" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	E0701 22:49:47.827148  132237 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/insufficient-storage-20220701224937-10066/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220701224937-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20220701224937-10066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20220701224937-10066: (6.041498761s)
--- PASS: TestInsufficientStorage (16.60s)
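
Note: with --output=json, start emits one CloudEvent per line (the specversion/type/data records quoted above), and the out-of-disk condition surfaces as an io.k8s.sigs.minikube.error event (RSRC_DOCKER_STORAGE, exit code 26). A sketch, assuming jq is available, of pulling the error message out of such a stream:

	out/minikube-linux-amd64 start -p insufficient-storage-20220701224937-10066 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=containerd \
		| jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'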

                                                
                                    
TestRunningBinaryUpgrade (94.56s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.1150418804.exe start -p running-upgrade-20220701225317-10066 --memory=2200 --vm-driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.1150418804.exe start -p running-upgrade-20220701225317-10066 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (52.885237184s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20220701225317-10066 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20220701225317-10066 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (38.808272458s)
helpers_test.go:175: Cleaning up "running-upgrade-20220701225317-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20220701225317-10066

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220701225317-10066: (2.461473952s)
--- PASS: TestRunningBinaryUpgrade (94.56s)
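
Note: the running-binary upgrade above boils down to two starts against the same profile: a previously released binary (staged under /tmp, presumably by the test harness) creates the cluster, then the freshly built binary restarts it in place. The two commands from the log:

	/tmp/minikube-v1.16.0.1150418804.exe start -p running-upgrade-20220701225317-10066 --memory=2200 --vm-driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 start -p running-upgrade-20220701225317-10066 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd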

                                                
                                    
TestMissingContainerUpgrade (144.49s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.4012718757.exe start -p missing-upgrade-20220701225053-10066 --memory=2200 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.4012718757.exe start -p missing-upgrade-20220701225053-10066 --memory=2200 --driver=docker  --container-runtime=containerd: (1m25.117206618s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220701225053-10066
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220701225053-10066: (11.519956772s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220701225053-10066
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20220701225053-10066 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0701 22:52:32.034604   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20220701225053-10066 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (43.914754399s)
helpers_test.go:175: Cleaning up "missing-upgrade-20220701225053-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20220701225053-10066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20220701225053-10066: (3.468472871s)
--- PASS: TestMissingContainerUpgrade (144.49s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.25s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.25s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220701224953-10066 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220701224953-10066 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (115.162015ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-20220701224953-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14483
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
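
Note: --no-kubernetes and --kubernetes-version are mutually exclusive, which is exactly what the MK_USAGE error above reports (exit status 14, before any node is created). Either flag on its own is accepted; the first form below is run later in this report, and the second mirrors the kubernetes-upgrade invocations:

	out/minikube-linux-amd64 start -p NoKubernetes-20220701224953-10066 --no-kubernetes --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 start -p NoKubernetes-20220701224953-10066 --kubernetes-version=v1.24.2 --driver=docker --container-runtime=containerd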

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (46.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220701224953-10066 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220701224953-10066 --driver=docker  --container-runtime=containerd: (46.345555298s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220701224953-10066 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (46.83s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (128.94s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.3136400043.exe start -p stopped-upgrade-20220701224953-10066 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0701 22:50:13.508928   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
E0701 22:50:34.503796   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.16.0.3136400043.exe start -p stopped-upgrade-20220701224953-10066 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (53.117683804s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.16.0.3136400043.exe -p stopped-upgrade-20220701224953-10066 stop

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.16.0.3136400043.exe -p stopped-upgrade-20220701224953-10066 stop: (12.409342311s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20220701224953-10066 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20220701224953-10066 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m3.416298639s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (128.94s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220701224953-10066 --no-kubernetes --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220701224953-10066 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.089550403s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220701224953-10066 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220701224953-10066 status -o json: exit status 2 (474.010776ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-20220701224953-10066","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20220701224953-10066
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220701224953-10066: (3.670547639s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.23s)
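
Note: "status -o json" stays machine-readable even when it exits non-zero; exit status 2 here corresponds to a running host with Kubernetes stopped, as expected after a --no-kubernetes start. A sketch, assuming jq, that reads the two fields this test compares:

	out/minikube-linux-amd64 -p NoKubernetes-20220701224953-10066 status -o json | jq -r '.Host + " / " + .Kubelet'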

                                                
                                    
TestNoKubernetes/serial/Start (6.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220701224953-10066 --no-kubernetes --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220701224953-10066 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.73855346s)
--- PASS: TestNoKubernetes/serial/Start (6.74s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220701224953-10066 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220701224953-10066 "sudo systemctl is-active --quiet service kubelet": exit status 1 (526.56956ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.53s)
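
Note: the check above asserts that no kubelet unit is active inside the node: "systemctl is-active" exits 3 for an inactive unit (the "Process exited with status 3" in stderr), which minikube ssh propagates as a non-zero exit. A sketch of the same assertion:

	if ! out/minikube-linux-amd64 ssh -p NoKubernetes-20220701224953-10066 "sudo systemctl is-active --quiet service kubelet"; then
		echo "kubelet inactive, as expected"
	fi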

                                                
                                    
TestNoKubernetes/serial/ProfileList (2.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.935824782s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.69s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20220701224953-10066
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220701224953-10066: (1.982192284s)
--- PASS: TestNoKubernetes/serial/Stop (1.98s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220701224953-10066 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220701224953-10066 --driver=docker  --container-runtime=containerd: (6.406271645s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.41s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220701224953-10066 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220701224953-10066 "sudo systemctl is-active --quiet service kubelet": exit status 1 (425.386634ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

                                                
                                    
TestNetworkPlugins/group/false (0.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:220: (dbg) Run:  out/minikube-linux-amd64 start -p false-20220701225120-10066 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:220: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20220701225120-10066 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (467.402799ms)

                                                
                                                
-- stdout --
	* [false-20220701225120-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14483
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 22:51:20.760176  153853 out.go:296] Setting OutFile to fd 1 ...
	I0701 22:51:20.766377  153853 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:51:20.766402  153853 out.go:309] Setting ErrFile to fd 2...
	I0701 22:51:20.766410  153853 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 22:51:20.767110  153853 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
	I0701 22:51:20.767582  153853 out.go:303] Setting JSON to false
	I0701 22:51:20.769334  153853 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2034,"bootTime":1656713847,"procs":564,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 22:51:20.769457  153853 start.go:125] virtualization: kvm guest
	I0701 22:51:20.772944  153853 out.go:177] * [false-20220701225120-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0701 22:51:20.777140  153853 out.go:177]   - MINIKUBE_LOCATION=14483
	I0701 22:51:20.777300  153853 notify.go:193] Checking for updates...
	I0701 22:51:20.780524  153853 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 22:51:20.782169  153853 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
	I0701 22:51:20.783624  153853 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
	I0701 22:51:20.785148  153853 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 22:51:20.786971  153853 config.go:178] Loaded profile config "kubernetes-upgrade-20220701225105-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0701 22:51:20.787068  153853 config.go:178] Loaded profile config "missing-upgrade-20220701225053-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.0
	I0701 22:51:20.787197  153853 config.go:178] Loaded profile config "stopped-upgrade-20220701224953-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0701 22:51:20.787253  153853 driver.go:360] Setting default libvirt URI to qemu:///system
	I0701 22:51:20.865354  153853 docker.go:137] docker version: linux-20.10.17
	I0701 22:51:20.865458  153853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 22:51:21.087166  153853 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:89 OomKillDisable:true NGoroutines:125 SystemTime:2022-07-01 22:51:20.927820005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0701 22:51:21.087317  153853 docker.go:254] overlay module found
	I0701 22:51:21.090019  153853 out.go:177] * Using the docker driver based on user configuration
	I0701 22:51:21.091379  153853 start.go:284] selected driver: docker
	I0701 22:51:21.091395  153853 start.go:808] validating driver "docker" against <nil>
	I0701 22:51:21.091413  153853 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 22:51:21.125339  153853 out.go:177] 
	W0701 22:51:21.126872  153853 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0701 22:51:21.128342  153853 out.go:177] 

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "false-20220701225120-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20220701225120-10066
--- PASS: TestNetworkPlugins/group/false (0.82s)
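
Note: this failure is the expected outcome: the containerd runtime requires a CNI, so --cni=false is rejected with MK_USAGE (exit status 14) before any container is created. Every concrete CNI selection passes with containerd in this report (kindnet, calico, bridge, cilium), e.g. the bridge run below:

	out/minikube-linux-amd64 start -p bridge-20220701225120-10066 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker --container-runtime=containerd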

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20220701224953-10066
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                    
TestPause/serial/Start (62.14s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220701225326-10066 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0701 22:53:55.081622   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220701225326-10066 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m2.140427753s)
--- PASS: TestPause/serial/Start (62.14s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (16.14s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220701225326-10066 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220701225326-10066 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (16.132747484s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (16.14s)

                                                
                                    
TestPause/serial/Pause (0.82s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220701225326-10066 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.82s)

                                                
                                    
TestPause/serial/VerifyStatus (0.44s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20220701225326-10066 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20220701225326-10066 --output=json --layout=cluster: exit status 2 (440.728244ms)

                                                
                                                
-- stdout --
	{"Name":"pause-20220701225326-10066","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220701225326-10066","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)
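
Note: with --layout=cluster the status JSON carries HTTP-style codes per component; a paused cluster reports StatusCode 418 ("Paused") and the command exits 2. A sketch, assuming jq, for reading the cluster-level state out of that payload:

	out/minikube-linux-amd64 status -p pause-20220701225326-10066 --output=json --layout=cluster \
		| jq -r '.StatusName + " (" + (.StatusCode|tostring) + ")"'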

                                                
                                    
TestPause/serial/Unpause (0.79s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20220701225326-10066 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.79s)

                                                
                                    
TestPause/serial/PauseAgain (0.89s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220701225326-10066 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

                                                
                                    
TestPause/serial/DeletePaused (2.86s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20220701225326-10066 --alsologtostderr -v=5

                                                
                                                
=== CONT  TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20220701225326-10066 --alsologtostderr -v=5: (2.856525102s)
--- PASS: TestPause/serial/DeletePaused (2.86s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (2.48s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json

                                                
                                                
=== CONT  TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.373208783s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-20220701225326-10066
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-20220701225326-10066: exit status 1 (37.328296ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220701225326-10066

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (2.48s)
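
Note: deletion is verified negatively: once the profile is gone, "docker volume inspect" prints "No such volume" and exits 1. A sketch of the same cleanup check:

	if ! docker volume inspect pause-20220701225326-10066 >/dev/null 2>&1; then
		echo "profile volume removed"
	fi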

                                                
                                    
TestNetworkPlugins/group/auto/Start (58.60s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20220701225119-10066 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p auto-20220701225119-10066 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (58.603315243s)
--- PASS: TestNetworkPlugins/group/auto/Start (58.60s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (49.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20220701225120-10066 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd
E0701 22:55:13.510265   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20220701225120-10066 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: (49.745962714s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (49.75s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (68.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20220701225121-10066 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd
E0701 22:55:34.503327   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p calico-20220701225121-10066 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: (1m8.541553329s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.54s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-5h9g9" [1cb1629f-b510-4a8a-a19c-1ed57b23548f] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.014822713s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
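
Note: the ControllerPod step waits for the CNI's daemon pod (label app=kindnet in kube-system) to report healthy. Roughly the same wait expressed with kubectl (a sketch; the test uses its own poller with the 10m0s budget shown above):

	kubectl --context kindnet-20220701225120-10066 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m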

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20220701225120-10066 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220701225120-10066 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-kq2dd" [908c9488-eacf-4b9e-83ba-0f715990d19a] Pending
helpers_test.go:342: "netcat-869c55b6dc-kq2dd" [908c9488-eacf-4b9e-83ba-0f715990d19a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-kq2dd" [908c9488-eacf-4b9e-83ba-0f715990d19a] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.005980593s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.36s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20220701225119-10066 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.48s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220701225119-10066 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-zzvpw" [afa6af28-3b27-4a3f-97e8-82954252e5c2] Pending
helpers_test.go:342: "netcat-869c55b6dc-zzvpw" [afa6af28-3b27-4a3f-97e8-82954252e5c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-zzvpw" [afa6af28-3b27-4a3f-97e8-82954252e5c2] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.006217945s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220701225120-10066 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220701225120-10066 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220701225120-10066 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)
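
Note: the Localhost and HairPin steps both run a zero-I/O netcat scan from inside the deployment (-z scan only, -w 5 five-second timeout, -i 5 interval): first against localhost:8080, then against the pod's own Service name ("netcat"), which only succeeds when the CNI handles hairpin traffic back to the originating pod. The two probes from the log:

	kubectl --context kindnet-20220701225120-10066 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context kindnet-20220701225120-10066 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"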

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220701225119-10066 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220701225119-10066 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220701225119-10066 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (58.90s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20220701225120-10066 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220701225120-10066 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (58.89545863s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (58.90s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (42.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20220701225120-10066 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220701225120-10066 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (42.507723836s)
--- PASS: TestNetworkPlugins/group/bridge/Start (42.51s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-8r57j" [08f52bf6-c4ea-408e-b0d7-d43c67e5cc53] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.014565374s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20220701225120-10066 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20220701225121-10066 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220701225120-10066 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-xjzsh" [e59fddb0-2736-4bae-95ae-fd432f6cb766] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-xjzsh" [e59fddb0-2736-4bae-95ae-fd432f6cb766] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.007069114s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.20s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (8.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-20220701225121-10066 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-nf4s4" [4a7efcc4-6838-4844-9c1f-6df43e1f4e57] Pending
helpers_test.go:342: "netcat-869c55b6dc-nf4s4" [4a7efcc4-6838-4844-9c1f-6df43e1f4e57] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-nf4s4" [4a7efcc4-6838-4844-9c1f-6df43e1f4e57] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.005917703s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.25s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220701225121-10066 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-20220701225121-10066 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-20220701225121-10066 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)
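The three short subtests above probe progressively stricter connectivity from inside the pod: DNS resolves kubernetes.default, Localhost connects to the pod's own loopback on port 8080, and HairPin dials the pod's own Service name, which only succeeds when the CNI/kube-proxy hairpin path lets a pod reach itself through its Service. The HairPin probe reduces to roughly this Go sketch (hypothetical helper and hard-coded profile name, not the repo's actual code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// hairpin execs netcat inside the deployment so the pod dials its own
	// Service name; -z only probes the port, -w 5 bounds the wait.
	func hairpin(kubeContext string) error {
		cmd := exec.Command("kubectl", "--context", kubeContext,
			"exec", "deployment/netcat", "--",
			"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("hairpin probe failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		fmt.Println(hairpin("calico-20220701225121-10066"))
	}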

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220701225120-10066 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-20220701225120-10066 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-20220701225120-10066 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

TestNetworkPlugins/group/cilium/Start (72.49s)
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20220701225121-10066 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220701225121-10066 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: (1m12.490802369s)
--- PASS: TestNetworkPlugins/group/cilium/Start (72.49s)

TestStartStop/group/old-k8s-version/serial/FirstStart (338.36s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220701225700-10066 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220701225700-10066 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (5m38.356788158s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (338.36s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.48s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20220701225120-10066 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.48s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220701225120-10066 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-f86v9" [b120c207-7cf4-4ea6-b071-5ec4c9d3156c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-f86v9" [b120c207-7cf4-4ea6-b071-5ec4c9d3156c] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 15.00578103s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.20s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220701225120-10066 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-20220701225120-10066 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-20220701225120-10066 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-456zg" [c9c3290c-6276-40a0-917b-4810304c057b] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.014349078s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20220701225121-10066 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.38s)

TestNetworkPlugins/group/cilium/NetCatPod (9.81s)
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220701225121-10066 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-wl8s7" [78c6ad88-ed1d-4fde-85ae-c5f01f3df55b] Pending
helpers_test.go:342: "netcat-869c55b6dc-wl8s7" [78c6ad88-ed1d-4fde-85ae-c5f01f3df55b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-wl8s7" [78c6ad88-ed1d-4fde-85ae-c5f01f3df55b] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 9.006500834s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (9.81s)

TestNetworkPlugins/group/cilium/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220701225121-10066 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.14s)

TestNetworkPlugins/group/cilium/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220701225121-10066 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.11s)

TestNetworkPlugins/group/cilium/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220701225121-10066 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.11s)
E0701 23:07:10.107244   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
E0701 23:07:15.379580   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:07:28.558633   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory
E0701 23:07:32.034359   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/ingress-addon-legacy-20220701223107-10066/client.crt: no such file or directory
E0701 23:08:11.919079   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
E0701 23:08:16.555068   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
E0701 23:08:39.605450   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (57.24s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220701225830-10066 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220701225830-10066 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2: (57.236943049s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (57.24s)

TestStartStop/group/embed-certs/serial/DeployApp (9.32s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220701225830-10066 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [804c079c-55c3-4eba-9228-4e79466bd63b] Pending
helpers_test.go:342: "busybox" [804c079c-55c3-4eba-9228-4e79466bd63b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [804c079c-55c3-4eba-9228-4e79466bd63b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.012283986s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220701225830-10066 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.6s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220701225830-10066 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-20220701225830-10066 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.60s)

TestStartStop/group/embed-certs/serial/Stop (20.15s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20220701225830-10066 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20220701225830-10066 --alsologtostderr -v=3: (20.147035018s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (20.15s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220701225830-10066 -n embed-certs-20220701225830-10066
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220701225830-10066 -n embed-certs-20220701225830-10066: exit status 7 (101.490321ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220701225830-10066 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)
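Note the "(may be ok)" wording: minikube status exits non-zero for a stopped cluster, so the test treats exit status 7 with Host=Stopped as the expected state right after minikube stop, and only then enables the dashboard addon. A tolerant probe could look roughly like this (a sketch under that assumption, not the actual helper code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// statusStopped reports whether the profile's host is stopped, tolerating
	// the non-zero exit that "minikube status" uses for that state.
	func statusStopped(profile string) (bool, error) {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", profile)
		out, err := cmd.Output()
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
			err = nil // "status error: exit status 7 (may be ok)"
		}
		return strings.TrimSpace(string(out)) == "Stopped", err
	}

	func main() {
		fmt.Println(statusStopped("embed-certs-20220701225830-10066"))
	}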

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (322.64s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220701225830-10066 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2
E0701 23:00:13.508913   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220701225830-10066 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2: (5m22.231260666s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220701225830-10066 -n embed-certs-20220701225830-10066
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (322.64s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.31s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-20220701225700-10066 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [6b277f37-beb7-40a1-a3f1-98e7b5d8f1d7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [6b277f37-beb7-40a1-a3f1-98e7b5d8f1d7] Running
E0701 23:02:41.836259   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.010568624s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-20220701225700-10066 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.31s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.56s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220701225700-10066 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-20220701225700-10066 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.56s)

TestStartStop/group/old-k8s-version/serial/Stop (20.17s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20220701225700-10066 --alsologtostderr -v=3
E0701 23:03:04.346661   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20220701225700-10066 --alsologtostderr -v=3: (20.165483475s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.17s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220701225700-10066 -n old-k8s-version-20220701225700-10066
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220701225700-10066 -n old-k8s-version-20220701225700-10066: exit status 7 (108.610459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220701225700-10066 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/old-k8s-version/serial/SecondStart (628.77s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220701225700-10066 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0701 23:03:09.618254   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:03:11.918203   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
E0701 23:03:11.923460   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
E0701 23:03:11.937987   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
E0701 23:03:11.958242   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
E0701 23:03:11.998491   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
E0701 23:03:12.079386   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
E0701 23:03:12.239973   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
E0701 23:03:12.560276   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
E0701 23:03:13.200974   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
E0701 23:03:14.481392   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
E0701 23:03:17.042355   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
E0701 23:03:22.163061   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
E0701 23:03:22.797369   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory
E0701 23:03:27.312006   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
E0701 23:03:32.403744   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
E0701 23:03:35.699993   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
E0701 23:03:37.547793   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
E0701 23:03:52.884101   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
E0701 23:04:26.266870   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
E0701 23:04:31.538596   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:04:33.844770   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
E0701 23:04:44.717843   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220701225700-10066 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (10m28.323625471s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220701225700-10066 -n old-k8s-version-20220701225700-10066
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (628.77s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-84fp6" [a984c558-4813-44d4-bea9-7b0a776827a1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011133091s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-84fp6" [a984c558-4813-44d4-bea9-7b0a776827a1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006514333s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-20220701225830-10066 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.41s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20220701225830-10066 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220510-4929dd75
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.41s)
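The "Found non-minikube image" lines are informational: VerifyKubernetesImages fetches the image list as JSON over SSH and reports anything outside the image set minikube itself ships (here, the kindnet and busybox images left over from earlier subtests). The filtering amounts to roughly the following; the field names match crictl's images -o json output, but the allow-list is a simplified assumption, not the test's real expected-image table:

	package main

	import (
		"encoding/json"
		"fmt"
		"strings"
	)

	// imageList matches the shape of `crictl images -o json`.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// nonMinikubeImages returns tags outside a (simplified) allow-list of
	// registries that minikube's own images come from.
	func nonMinikubeImages(raw []byte) ([]string, error) {
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			return nil, err
		}
		var found []string
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if !strings.HasPrefix(tag, "k8s.gcr.io/") {
					found = append(found, tag)
				}
			}
		}
		return found, nil
	}

	func main() {
		raw := []byte(`{"images":[{"repoTags":["kindest/kindnetd:v20220510-4929dd75"]}]}`)
		fmt.Println(nonMinikubeImages(raw))
	}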

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.32s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20220701225830-10066 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220701225830-10066 -n embed-certs-20220701225830-10066
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220701225830-10066 -n embed-certs-20220701225830-10066: exit status 2 (399.357974ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220701225830-10066 -n embed-certs-20220701225830-10066
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220701225830-10066 -n embed-certs-20220701225830-10066: exit status 2 (393.028137ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20220701225830-10066 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220701225830-10066 -n embed-certs-20220701225830-10066
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220701225830-10066 -n embed-certs-20220701225830-10066
E0701 23:05:34.503558   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/addons-20220701222350-10066/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.32s)
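The Pause flow asserts both directions: after minikube pause, the status probes are expected to report APIServer=Paused and Kubelet=Stopped, each via an exit status 2 the test tolerates, and after minikube unpause both probes must exit cleanly again. Sketched in Go (a hypothetical wrapper around the same CLI calls shown in the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// status runs one "minikube status" probe; stdout is captured even when
	// the command exits non-zero (exit status 2 while paused, per the log).
	func status(profile, field string) (string, int) {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{."+field+"}}", "-p", profile)
		out, _ := cmd.Output()
		code := 0
		if cmd.ProcessState != nil {
			code = cmd.ProcessState.ExitCode()
		}
		return strings.TrimSpace(string(out)), code
	}

	func main() {
		profile := "embed-certs-20220701225830-10066"
		exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).Run()
		for _, field := range []string{"APIServer", "Kubelet"} {
			out, code := status(profile, field)
			fmt.Printf("%s=%s (exit %d)\n", field, out, code)
		}
		exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile).Run()
	}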

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (35.84s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220701230537-10066 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2
E0701 23:05:43.467780   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
E0701 23:05:51.855323   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
E0701 23:05:55.765253   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/cilium-20220701225121-10066/client.crt: no such file or directory
E0701 23:06:11.152454   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kindnet-20220701225120-10066/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220701230537-10066 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2: (35.841261106s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.84s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.55s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220701230537-10066 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.55s)

TestStartStop/group/newest-cni/serial/Stop (20.15s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20220701230537-10066 --alsologtostderr -v=3
E0701 23:06:19.540183   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/auto-20220701225119-10066/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20220701230537-10066 --alsologtostderr -v=3: (20.15072725s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (20.15s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220701230537-10066 -n newest-cni-20220701230537-10066
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220701230537-10066 -n newest-cni-20220701230537-10066: exit status 7 (100.525142ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220701230537-10066 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (29.5s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220701230537-10066 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2
E0701 23:06:42.423076   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/calico-20220701225121-10066/client.crt: no such file or directory
E0701 23:06:47.697350   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/bridge-20220701225120-10066/client.crt: no such file or directory
E0701 23:07:00.872876   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/enable-default-cni-20220701225120-10066/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220701230537-10066 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2: (29.077536781s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220701230537-10066 -n newest-cni-20220701230537-10066
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.50s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20220701230537-10066 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220510-4929dd75
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)

TestStartStop/group/newest-cni/serial/Pause (3.09s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20220701230537-10066 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220701230537-10066 -n newest-cni-20220701230537-10066
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220701230537-10066 -n newest-cni-20220701230537-10066: exit status 2 (399.073471ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220701230537-10066 -n newest-cni-20220701230537-10066
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220701230537-10066 -n newest-cni-20220701230537-10066: exit status 2 (397.585259ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20220701230537-10066 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220701230537-10066 -n newest-cni-20220701230537-10066
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220701230537-10066 -n newest-cni-20220701230537-10066
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.59s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220701225718-10066 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-20220701225718-10066 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.59s)

TestStartStop/group/no-preload/serial/Stop (20.11s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20220701225718-10066 --alsologtostderr -v=3
E0701 23:10:13.509004   10066 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/functional-20220701222815-10066/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20220701225718-10066 --alsologtostderr -v=3: (20.11081321s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.11s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220701225718-10066 -n no-preload-20220701225718-10066
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220701225718-10066 -n no-preload-20220701225718-10066: exit status 7 (98.200166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220701225718-10066 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.62s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220701230032-10066 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-different-port-20220701230032-10066 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.62s)

TestStartStop/group/default-k8s-different-port/serial/Stop (20.16s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20220701230032-10066 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220701230032-10066 --alsologtostderr -v=3: (20.164316231s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (20.16s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220701230032-10066 -n default-k8s-different-port-20220701230032-10066
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220701230032-10066 -n default-k8s-different-port-20220701230032-10066: exit status 7 (114.430583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220701230032-10066 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-958c5c65f-kvb25" [bd5c7273-9787-44fc-8bfe-b5ec02aa2335] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01149041s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-958c5c65f-kvb25" [bd5c7273-9787-44fc-8bfe-b5ec02aa2335] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006551095s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-20220701225700-10066 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20220701225700-10066 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220510-4929dd75
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/old-k8s-version/serial/Pause (3.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20220701225700-10066 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220701225700-10066 -n old-k8s-version-20220701225700-10066
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220701225700-10066 -n old-k8s-version-20220701225700-10066: exit status 2 (387.335815ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220701225700-10066 -n old-k8s-version-20220701225700-10066
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220701225700-10066 -n old-k8s-version-20220701225700-10066: exit status 2 (386.209496ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20220701225700-10066 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220701225700-10066 -n old-k8s-version-20220701225700-10066
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220701225700-10066 -n old-k8s-version-20220701225700-10066
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.02s)

Test skip (23/279)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.24.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.24.2/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.24.2/cached-images (0.00s)

TestDownloadOnly/v1.24.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.24.2/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.24.2/binaries (0.00s)

TestDownloadOnly/v1.24.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.24.2/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.24.2/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:450: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:455: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (0.35s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:91: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:175: Cleaning up "kubenet-20220701225119-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20220701225119-10066
--- SKIP: TestNetworkPlugins/group/kubenet (0.35s)

TestNetworkPlugins/group/flannel (0.32s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220701225120-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20220701225120-10066
--- SKIP: TestNetworkPlugins/group/flannel (0.32s)

TestNetworkPlugins/group/custom-flannel (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220701225121-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-flannel-20220701225121-10066
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.33s)

TestStartStop/group/disable-driver-mounts (0.26s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220701230032-10066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20220701230032-10066
--- SKIP: TestStartStop/group/disable-driver-mounts (0.26s)
